Look Ma, No Hands!

For Tesla Model S owners, it was just the next software update – but it became something more. Self-driving cars are a long-held dream of the automobile industry and of society at large, present in science fiction since at least 1935, according to the Computer History Museum. Tesla has just brought that dream a step closer. With its new “Autopilot” software, drivers of Tesla electric cars can now let their vehicles handle certain tasks on their own, such as lane changes, speed control, collision avoidance, and parallel parking, as the company’s official blog describes.

While many may assume that this is it, that the goal of an autonomous vehicle has been achieved, that is not entirely the case. In a press conference covered by Green Car Reports, CEO Elon Musk stated that this is “Autopilot Version 1” – still in the testing and revision stages – and that it lacks several key functions, such as recognizing stop signs and red lights. For now, drivers are strongly encouraged to keep their hands on the steering wheel at all times, because in the end, they are the ones with override control over the semi-autonomous Autopilot functions.

Despite these limitations, the beta version of Autopilot is not the end of the road for Tesla. Musk has tweeted that more features are forthcoming in version “1.01: curve speed adaption, controller smoothness, better lane holding on poor roads, improved fleet learning!” The hardware in the cars themselves – the sensors and controls – was designed to support Autopilot, and the hope is that over a series of updates and possible additions, full autonomy will be achieved in three years or less.

Self-driving cars seem to be the future of the automobile industry, but a host of issues surface when considering their software. For instance, the vehicles do not “see” the road as a human driver does. Instead, they rely on a variety of sensors and respond with preprogrammed maneuvers, which raises a “trolley-problem” ethical dilemma. In the trolley problem, a runaway trolley is speeding toward five people tied to the track. A bystander can divert it onto a second track, but one person is tied to that track. The choice is either to do nothing and let five people die, or to act and willingly condemn one person.

The issue with self-driving cars, highlighted by a recent MIT study, is that there are occasions when swerving to avoid pedestrians in the roadway would kill the car’s occupants. When surveyed, most people endorsed utilitarian software that minimizes the total loss of life, even at the cost of the driver, but fewer believed such software would actually be implemented. After all, who wants to buy a car that could purposefully kill them someday? Ironically, human error causes most automobile accidents, and autonomous vehicles could reduce that toll – yet reluctance to purchase an automobile that might sacrifice its owner could cancel out that benefit.
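To make that dilemma concrete, here is a minimal sketch, in Python, of what a strictly utilitarian decision rule might look like. Every name and casualty figure below is hypothetical; a real autonomous-driving stack is vastly more complex and exposes nothing this simple.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate action and its predicted casualties (hypothetical model)."""
    name: str
    pedestrian_deaths: int
    occupant_deaths: int

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Strictly utilitarian rule: pick the maneuver that minimizes the
    total predicted loss of life, regardless of who is inside the car."""
    return min(options, key=lambda m: m.pedestrian_deaths + m.occupant_deaths)

# The trolley-style scenario from the article: stay the course and strike
# five pedestrians, or swerve and sacrifice the lone occupant.
options = [
    Maneuver("stay_in_lane", pedestrian_deaths=5, occupant_deaths=0),
    Maneuver("swerve_off_road", pedestrian_deaths=0, occupant_deaths=1),
]

print(choose_maneuver(options).name)  # -> swerve_off_road
```

A rule this blunt is exactly what survey respondents said they approved of in principle but would hesitate to buy, which is the tension the MIT study captures.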

Musk has made clear that Autopilot is not yet fully developed, but the technological and ethical issues self-driving technology raises must be confronted now, because regulation will take longer to work out than the technology will take to mature.