Our world does not stand still, and technological progress keeps gaining momentum. At first glance, every invention is created for our convenience. But do all of them truly benefit us?
Let's talk about one of the inventions of the century: self-driving cars. They appear in the news more and more often.
They have already been involved in fatal accidents, fled from police, and saved lives. How can an algorithm decide matters of human life? And who is responsible if that decision turns out to be wrong?
The Key Advantages of Self-Driving Cars
Today, many large companies are developing such cars, among them Google, General Motors, Volkswagen, Audi, BMW, and Volvo.
This active interest is explained by the many advantages self-driving cars offer:
- Lower transportation costs, since no driver is needed;
- Fuel savings through centralized traffic management;
- Freeing up the driver's personal time;
- Mobility for people with vision problems and other disabilities;
- Fewer accidents;
- Increased road capacity.
Many agree that these benefits are significant, especially when people with disabilities gain the ability to use a car, or when it comes to reducing accidents. Unfortunately, the technology's shortcomings have not gone anywhere.
The Main Threat of Self-Driving Cars
Like any technology, self-driving cars have drawbacks alongside their advantages, and in this case the drawbacks touch what matters most: the safety of human life.
The disadvantages of self-driving cars include:
- Difficulties in determining liability for damage;
- Questions about software reliability;
- Lack of driving experience in critical situations;
- Job losses for people whose work involves driving;
- Privacy concerns.
How Do Self-Driving Cars Work: The Main Points
These cars are equipped with advanced hardware and software. Radar, cameras, and other sensors feed a computer-vision system that assesses the environment and makes driving decisions.
This is, in outline, how self-driving cars work. Such cars can make decisions much faster than human drivers.
In practice, this means the car can begin emergency braking sooner than a person if a pedestrian steps onto the road at night. But it does not mean the machine decides the way a person does.
The machine only imitates human behavior, and only superficially. The system is trained on many road situations together with a plan of action for each, so the algorithm knows what to do if, for example, a small child runs into the middle of the street to get a ball.
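The sense-decide-act loop described above can be sketched in a few lines. This is a toy illustration only, not a real driving stack: the `Perception` class, the thresholds, and the action names are all hypothetical stand-ins for the trained perception models and planners that production systems actually use.

```python
# A minimal, hypothetical sketch of the sense-decide-act loop.
# Real autonomous-driving stacks fuse radar, camera, and lidar data
# through trained models; a simple rule table stands in for that here.

from dataclasses import dataclass

@dataclass
class Perception:
    """What the (simulated) sensors report about the scene ahead."""
    obstacle_distance_m: float   # distance to nearest obstacle, metres
    obstacle_is_moving: bool     # e.g. a pedestrian stepping onto the road
    ego_speed_mps: float         # the car's own speed, metres per second

def decide(p: Perception) -> str:
    """Map a perceived situation to one of three illustrative actions."""
    # Time-to-collision: seconds until the car reaches the obstacle.
    ttc = p.obstacle_distance_m / max(p.ego_speed_mps, 0.1)
    if ttc < 2.0:                           # imminent collision
        return "emergency_brake"
    if p.obstacle_is_moving and ttc < 5.0:  # moving hazard nearby
        return "slow_down"
    return "continue"

# A pedestrian 15 m ahead at 10 m/s leaves only 1.5 s to react.
print(decide(Perception(15.0, True, 10.0)))    # emergency_brake
print(decide(Perception(40.0, True, 10.0)))    # slow_down
print(decide(Perception(100.0, False, 10.0)))  # continue
```

The point of the sketch is the limitation the article describes: the rules encode reactions, not judgment. Nothing in such a table can weigh whose life to protect.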
Whether such algorithms will be good enough to handle emergencies, or whether a combination of other technologies will solve the problem, is still unknown.
Advances in sensors and machine learning allow these cars to avoid obstacles and pedestrians, but they will not allow a car to decide whose life is worth saving. This is the main problem.
Humans vs. Artificial Intelligence
People are prone to mistakes. Human memory can fail and confuse facts, information is processed slowly, and physical and mental reactions slow further with age.
Artificial intelligence algorithms, by contrast, do not tire, forget nothing, and process information almost instantly.
But people can make decisions from insufficient data, drawing on common sense and cultural and ethical values. Crucially, people can also explain the logic behind those decisions.
Artificial intelligence algorithms, however, cannot be held responsible for their decisions and cannot stand as defendants in court for their mistakes.
This limitation prevents them from taking responsibility for decisions of life and death.
Who is Responsible for Damage Caused by Self-Driving Cars
Who is responsible when a self-driving car kills a pedestrian not because of an algorithm error, but precisely because its system worked as designed? The machine cannot accept responsibility for its actions, even if it could explain them.
If responsibility is shifted to the algorithm's developers, company representatives will have to appear in court every time an autopilot causes a human death.
Such measures would slow progress in machine learning, because no developer can guarantee the safety of an algorithm with 100% certainty. It would be easier for companies simply to abandon these areas of development.
Nor is it logical to prosecute the car's owner, who has no influence over the decisions the machine makes. That approach could lead humankind to give up on a future with self-driving cars altogether.
Perhaps this dilemma will fade into the past once self-driving cars become the norm. For now, however, there are still too many unresolved issues that may hold the technology back.
Such machines are two sides of the same coin: on one side, everything runs smoothly; on the other, one question remains. Are self-driving cars safe?
Charles Ebert is a mechanic with nine years of experience. He knows all the easy mods that improve a car's performance. If you are looking for car reviews, tests, information, and buying guides, you can visit one of his sites, such as autoexpertguides.com.