“By far the greatest danger of artificial intelligence is that people conclude too early that they understand it.” – Eliezer Yudkowsky
Stephen Hawking was arguably one of the greatest minds of his generation. In a 2014 interview with the BBC, he warned that the development of full Artificial Intelligence (AI) could spell the end of the human race. If you thought the Terminator movies were pure science fiction, it may be time to think again about the dangers of artificial intelligence.
Artificial Intelligence has been one of the most debated topics of the past decade. Algorithms and machine learning programs have allowed it to be incorporated into modern devices to some extent. The development of ‘smart’ gadgets that sense internal as well as external factors and adjust their behavior accordingly is a stepping stone towards full-fledged AI.
The pace at which AI is being pursued has raised questions among people in many fields. How far do we want to take it? Can we actually build machines that replicate human cognitive thinking? What would the invention of such programs mean for us? Taken together, these questions point to dangers of artificial intelligence that often go unexamined.
In this article, we look at what AI is, what it entails, and the threats it poses today and in the future.
What is Artificial Intelligence?
For most people, AI is something that resembles ‘Ultron’ from the Marvel movie Avengers: Age of Ultron. Although the long-term ambition is to develop a program approaching the technology depicted on screen, the AI we have today is very different. It is known as ‘narrow’ or ‘weak’ AI, and it is designed to perform only a limited set of tasks.
The voice assistants on our phones, such as Apple’s Siri, Amazon’s Alexa, and Samsung’s Bixby, are leading examples. The algorithms behind Google’s search engine and the systems that power self-driving cars are further examples of the ‘narrow’ AI in everyday use today. This technology plays a crucial role in improving the lives we lead.
Google stunned the world when it demonstrated its voice assistant making human-like phone calls to businesses to book appointments. Over the years, AI has become more and more powerful, with self-driving cars representing the pinnacle of what it can currently achieve. However, this is also where the dangers of AI begin to show through.
AI is still not at a point where it can make the decisions a reasonable human being would. In particular, these programs lack the emotional judgment that helps humans make split-second decisions. Consider a classic scenario for self-driving cars: if a pedestrian wanders onto the road at a crossing while others are still waiting for the signal to turn green, how should the car decide what to do? The people behind AI have yet to provide a satisfactory answer.
This is one of the many issues that shed light on the limitations and potential dangers of artificial intelligence.
Why is AI Safety Important?
AI is steadily becoming more powerful. Scientists and engineers are developing algorithms that observe scenarios and then adapt to similar situations in the future. We witness this on a small scale when browsing online: websites like YouTube observe your viewing history to suggest what you may be interested in watching next.
Even with the narrow AI we have today, a number of issues are emerging, such as the evident lack of cognitive decision-making. AI is expected to be deployed in areas including economics, law, information technology, research, and much more. These programs will have to deal with aspects like verification, validity, security, and control. Given such widespread adoption, safety is an important consideration.
Safety research provides evidence of the dangers of artificial intelligence. It gives substance to what would otherwise be simple claims with no sound foundation regarding the potential harm AI could cause. At the end of the day, AI is still a program that can be modified, reprogrammed, or tweaked to perform whatever action the team behind it deems fit.
Like all computer programs, AI is vulnerable to being hacked. Considering that AI will run everything from cars, airplanes, and automated trading systems to government portals and pacemakers, the scale of destruction in the event of a mishap would be unprecedented. This is why AI safety is a crucial topic that needs to be researched extensively before we take the technology forward.
How can Artificial Intelligence be Dangerous?
The dangers of artificial intelligence have been exposed on various fronts. Interestingly, a number of movies provide classic examples of what could happen if AI went wrong. ‘Eagle Eye’, starring Shia LaBeouf, depicts an advanced AI system that assigns itself the task of installing a new government. It may seem hypothetical and unrealistic, but this is one of the challenges we could face with systems of higher intelligence.
1. If AI is Configured to Cause Damage
Artificial Intelligence may seem like an autonomous program, but at the back end there is someone managing it to ensure proper performance. Moreover, AI is programmed in a specific way. There are two ways in which AI can be made to go astray: the system can suffer a hack, or the person with access to it can, intentionally or unintentionally, make it behave in a detrimental manner. Cybersecurity is a huge issue today, and if large organizations like Facebook cannot protect themselves from server breaches, what chance will we have once AI is widely deployed?
2. If the Program Takes a Negative Approach Towards a Positive Outcome
Artificial Intelligence is designed to be autonomous and as close to human thinking as possible. Advanced AI may even be intended to surpass humans by combining human-like reasoning with raw computing power. This may seem like an amazing initiative, but like humans, AI may take a negative approach towards a positive outcome. Hollywood movies like Avengers: Age of Ultron and Eagle Eye lend a practical perspective to the idea: a system capable of that much autonomy will ultimately be able to make decisions as it deems fit.
3. Privacy Invasion
Data has become the most valuable currency of our time. Even with narrow or no AI, organizations keep records of user information to target them more effectively, and some platforms go beyond ethical boundaries in collecting data on customers. Government authorities and Internet Service Providers do this on a regular basis, and cybercriminals are exploiting a growing number of vulnerabilities. This raises the question: with AI in place, where will its moral compass lie, and how far will it go to collect data and track people in the name of surveillance?
4. Manipulation Through Social Media
The power of social media to manipulate public opinion came to light during the 2016 US presidential election and the UK Brexit referendum. Although the allegations remain unproven, they demonstrate the sway these platforms and their algorithms hold over what the public thinks. Companies like Facebook and Google know who their users are, where they are located, and practically everything else about them. This gives them unprecedented power to determine how to target users and influence the views they hold on an issue. AI on its own can identify people and tailor content to them, which only amplifies this power.
5. Autonomous Warfare
The Artificial Intelligence that humans want to develop is designed to touch every aspect of our lives, and that obviously includes warfare. Wars are a messy business, and the greatest loss of all is the lives lost in the process. The future of warfare looks something like a scene from the Iron Man movies.
However, this is where things get complicated. If we trust machines and Artificial Intelligence to decide and fight wars for us, how do we ensure that they make the right decisions? Technology may be able to replicate human cognitive thinking, but that is not all our brains are capable of. Questions of morals and ethical boundaries matter most during a war. The biggest question here is how AI is supposed to make these decisions on our behalf, and how we can know they are right.
Advanced AI still belongs to the distant future; nothing of the sort seems achievable in the coming years. Even so, judging from what today’s narrow AI can already do, the fears expressed by some of the greatest minds of our generation could well come true if AI were ever to take over. Will the future of the world look like a scene from a Terminator movie? We can only wait and find out.