Self-driving, or autonomous, cars are coming to the streets whether consumers want them or not. For many people, having a computer system take them from one place to another without having to drive themselves is a welcome change. I enjoy driving, so I do not look forward to the possibility of losing the ability to drive myself from one location to another. That, however, is not my argument against autonomous cars. My biggest concern about self-driving cars is the maintenance of both the computer system and the mechanical functions of the cars themselves. Cars currently require maintenance on engines, brakes, drivetrains, and other systems. Although engine maintenance would change somewhat with hybrid and electric motors, maintenance would still need to occur. Then there is the problem of putting a computer system at the wheel; that system would need maintenance too. My question is: what happens when the computer system malfunctions? Currently, when a system such as the brakes or the engine fails, a driver can pull off to the side of the road and generally avoid a problem. If the computer system driving the car has similar issues, does it have the capability to avoid a problem? This is not much of a concern if the car has a system that lets the individual take control manually when something goes wrong. However, GM recently announced a plan to mass-produce an autonomous car with no pedals or steering wheel, as reported in the NPR article "GM Says Car With No Steering Wheel Or Pedals Ready For Streets In 2019." This means that the individuals in the car have no ability to take control of the vehicle manually if the computer system glitches. The companies building these self-driving cars need to create computer systems that do not start having issues the way our phones and laptops do after a few years. Given that these systems are much more advanced, that should be the case.
Another question to ask about self-driving cars is whether they are programmed to handle unforeseen issues like downed trees, animals in the road, freak weather, or even quick lane changes from manually driven cars, especially as self-driving technology mixes with manual drivers on the road. Human drivers tend to make gut decisions while driving, and sometimes things go badly. Those decisions do not always work, but in many situations where a crash is unavoidable and a driver has to choose, the choice made is the less severe option. The question for a self-driving car, then, is how the technology has been programmed to make these decisions. This dives into another argument about artificial intelligence, but it is an important question to ask. One option is to program the car to make the decision that kills or hurts the fewest people possible. The USA Today article "Self-driving cars programmed to decide who dies in a crash" discusses some of the hypotheticals that go along with this issue. The problem is that they are just hypotheticals, although the article argues that it is time to discuss the hypothetical situations a self-driving car may encounter because the technology is close to being mass-produced. Much of the self-driving technology is still in the development phase, and issues like those described above are rare. Is that, though, only because a few manufacturers, like Tesla, offer advanced autonomous features on their cars while most drivers still drive themselves? Unfortunately, this question probably cannot be answered until self-driving cars are mass-produced, many cars on the road encounter these unforeseen issues, and data comes out to answer it. Much of the concern over self-driving cars is a trust issue: consumers do not trust a computer over themselves in an adverse situation. Until there is data showing drivers that self-driving cars are safer than human drivers, many will continue to drive themselves. Whether one is against self-driving cars or for them, they are going to make their way to consumers regardless of what people want.