The ethics of self-driving cars is a complicated issue, because many variables go into even a single trip. There is always a chance of an accident, especially while humans and robots are driving on the same roads. This topic raises issues both small and large that need to be dealt with. One is what the car should do in the event of a potential crash. Another is how society adjusts to the potential job loss as these driverless cars and trucks continue to be rolled out. Humans make many mistakes, so automation may seem like a good idea at first, since many crashes are the result of human error. A big player in decision making is emotion: whether you are angry or sad will play a part in how you operate a motor vehicle. A robot, on the other hand, has no emotion, so the actions of others would not affect its judgment.
Solution 1:
The two cannot coexist on the same roads (for now).
With this approach we would slowly phase in driverless cars and phase out human drivers altogether, since it is difficult for AI to learn human behavior and anticipate the mistakes we make, such as not coming to a complete stop at a stop sign or changing lanes without warning. Today's vehicles are becoming more integrated with safety systems such as automatic braking, lane departure warnings, park assist, and self-driving modes. Today's auto manufacturers are laying the foundation for the future of driverless cars, so the idea is very possible and very real. One key system that would help bring this together is a communications system that allows autonomous vehicles to communicate with each other. This would help in situations such as city and highway driving, since a vehicle could signal to another car that it is trying to move over or exit a highway. The same idea could bridge human and autonomous vehicles, as both cars could be fitted with a system that lets the autonomous car communicate with a car driven by a human. There are flaws with this idea, since factors such as pride and emotion come into play. It will take a lot of time, and people will have to get accustomed to the idea of a computer controlling their vehicle.
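To make the idea a little more concrete, here is a minimal sketch, in Python, of what such an intent-broadcast exchange between two vehicles could look like. Everything in it is hypothetical: the `Intent` values, the `IntentMessage` fields, and the `Vehicle.announce`/`Vehicle.react` methods are invented for illustration and do not follow any real vehicle-to-vehicle standard such as DSRC or C-V2X.

```python
from dataclasses import dataclass
from enum import Enum, auto
import time


class Intent(Enum):
    """Hypothetical set of maneuvers a vehicle might announce ahead of time."""
    LANE_CHANGE_LEFT = auto()
    LANE_CHANGE_RIGHT = auto()
    EXIT_HIGHWAY = auto()
    HARD_BRAKE = auto()


@dataclass
class IntentMessage:
    """A broadcast announcing what the sender is about to do."""
    sender_id: str
    intent: Intent
    lane: int          # lane the sender currently occupies (numbered left to right)
    timestamp: float


class Vehicle:
    def __init__(self, vehicle_id: str, lane: int):
        self.vehicle_id = vehicle_id
        self.lane = lane

    def announce(self, intent: Intent) -> IntentMessage:
        """Broadcast an intent so nearby vehicles can react early."""
        return IntentMessage(self.vehicle_id, intent, self.lane, time.time())

    def react(self, msg: IntentMessage) -> str:
        """Decide how to respond to another vehicle's announcement."""
        if msg.intent is Intent.LANE_CHANGE_RIGHT and msg.lane + 1 == self.lane:
            # The sender wants to merge into our lane: yield space.
            return f"{self.vehicle_id}: slowing to let {msg.sender_id} merge"
        if msg.intent is Intent.HARD_BRAKE and msg.lane == self.lane:
            return f"{self.vehicle_id}: braking early behind {msg.sender_id}"
        return f"{self.vehicle_id}: no action needed"


if __name__ == "__main__":
    merging_car = Vehicle("AV-1", lane=1)
    trailing_car = Vehicle("AV-2", lane=2)

    message = merging_car.announce(Intent.LANE_CHANGE_RIGHT)
    print(trailing_car.react(message))
```

In this toy exchange the trailing car yields as soon as it hears the merge announcement, instead of having to infer the maneuver from the other car's movement, which is the kind of anticipation that is hard when a human driver changes lanes without warning.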
Solution 2:
We stop developing them because public opinion will never be positive.
How driverless cars are viewed in the public eye will be a major concern, and that perception will ultimately decide whether they are successful or not. If the people don't want it, then there is no reason to make it. Many people don't like the idea of a robot controlling their vehicle. Automation scares a lot of people because they fear it will replace their jobs and that the company they work for will kick them to the side.
Solution 3:
Leave the ethical decision up to the consumer.
(Assuming we are well into the future and self-driving cars are implemented everywhere)
Leave the decision up to the customer: do you want the car to protect you and look out for your best interest in a crash? It may sound crazy, but I sure am not going to buy a product that does not give me a say in what the car should do in the event of an accident. That decision should always be up to the customer, because nobody wants a vehicle that could potentially kill them if it decided that was the best course of action.
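As a rough illustration of what leaving that choice to the customer could look like in software, the sketch below treats the preference as an owner-set configuration value that the car's planner consults when it has to pick between bad options. The `CrashPriority` setting, the `Maneuver` risk numbers, and the `choose_maneuver` rule are all hypothetical, invented for this example and not taken from any real system.

```python
from dataclasses import dataclass
from enum import Enum


class CrashPriority(Enum):
    """Hypothetical owner-selected setting for unavoidable-crash situations."""
    PROTECT_OCCUPANTS = "protect_occupants"
    MINIMIZE_TOTAL_HARM = "minimize_total_harm"


@dataclass
class Maneuver:
    name: str
    occupant_risk: float   # estimated risk to the people inside the car (0..1)
    bystander_risk: float  # estimated risk to people outside the car (0..1)


def choose_maneuver(priority: CrashPriority, options: list[Maneuver]) -> Maneuver:
    """Pick among candidate maneuvers according to the owner's stated preference."""
    if priority is CrashPriority.PROTECT_OCCUPANTS:
        # The owner opted for a car that looks out for its occupants first.
        return min(options, key=lambda m: m.occupant_risk)
    # Otherwise weigh everyone equally and minimize the combined risk.
    return min(options, key=lambda m: m.occupant_risk + m.bystander_risk)


if __name__ == "__main__":
    candidates = [
        Maneuver("brake hard", occupant_risk=0.4, bystander_risk=0.1),
        Maneuver("swerve onto shoulder", occupant_risk=0.2, bystander_risk=0.5),
    ]
    # With the occupant-first setting, the car picks the maneuver that is
    # safest for its own passengers, even if bystanders bear more risk.
    print(choose_maneuver(CrashPriority.PROTECT_OCCUPANTS, candidates).name)
```

The point of the sketch is only that the ethical preference becomes an explicit, customer-visible setting rather than a rule buried in the manufacturer's code.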