Summary of Kiss, “The Danger of Using Artificial Intelligence in Development of Autonomous Vehicles”

TO: Prof. Ellis

FROM: Kevin Andiappen

DATE: Sept. 20, 2020

SUBJECT: 500-Word Summary

This is a 500-word summary of the article “The Danger of Using Artificial Intelligence in Development of Autonomous Vehicles” by Gabor Kiss, which discusses the risks of using artificial intelligence in automobiles.

Although self-driving cars have only recently become popular, the idea has been around for years: a car that would one day be fully autonomous, eliminating the need for a driver, so that technology could succeed where humans fail. According to Kiss, “The expectation of spreading self-driven cars lies in the hope of significantly decreasing the 1,3 million death toll accidents world-wide, which are caused by human factor 90 % of the time” (Kiss, 2019, p. 717). In other words, the goal of self-driving cars is to reduce the number of accidents caused by human error. Artificial intelligence can process data faster than humans can, which shortens reaction time in critical situations.

By the end of November 2018, Tesla cars had traveled a total of one billion miles in autonomous mode, with statistics showing one accident every 3 million miles. The U.S. Department of Transportation reports one accident every 492,000 miles in America, making self-driving cars roughly six to seven times safer by that comparison. The Society of Automotive Engineers has created a scale, from 0 to 5, for rating the intelligence and capabilities of a vehicle.
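The safety comparison above follows directly from the two per-accident mileage figures. A quick sketch of the arithmetic (the variable names are my own; the numbers are the ones quoted in the summary):

```python
# Mileage figures quoted in the summary (Kiss, 2019)
tesla_miles_per_accident = 3_000_000  # Tesla, autonomous mode
us_miles_per_accident = 492_000       # U.S. average, per the Department of Transportation

# How many times farther an autonomous Tesla travels per accident
ratio = tesla_miles_per_accident / us_miles_per_accident
print(f"Autonomous mode sees about {ratio:.1f}x more miles per accident")
```

Note that the raw ratio works out to about 6.1, which the summary's "six to seven times safer" figure rounds.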

NVIDIA is a company that incorporates deep learning into its AI. With this technology, cars can build a lifelike, detailed, interactive model of the world and perform fast calculations within seconds. There is no 100% safe solution for self-driving cars, but AI comes close because it can respond to traffic situations much faster than a human can. Human drivers may abuse this, however, by intentionally cutting in front of autonomous cars to force them to brake, or by pulling in front of them at highway entrances.

If someone were to replace a “road closed” sign with a “speed limit 50 mph” sign, the AI might not be able to tell which sign is legitimate, which could cause an accident; the same trick could fool a human driver as well. Digital light technology, which works like a projector, can shine symbols and lane markings onto the road. This could be used to deceive a self-driving car into following a fake lane, causing it to crash or drive to another location.

In conclusion, artificial intelligence is a challenge for developers because it requires them to prepare for every possible scenario. The safety features used in self-driving cars to prevent accidents could even be reprogrammed to cause them. The scenarios mentioned here are only a few of the many possible dangers of self-driving cars, and developers need to be aware of them so that they can properly train the AI.


Kiss, G. (2019). The danger of using artificial intelligence in development of autonomous vehicles. Interdisciplinary Description of Complex Systems, 17(4), 716–722.
