Self-Driving Trucks and the Internet of Things

There have been a number of articles online over the last couple of years about self-driving cars. About a year ago, in May 2015, Google announced that its self-driving cars had driven 1.7 million miles and been involved in only 11 accidents. If it is true that at least eight people per day are killed in distracted-driving accidents, then 11 accidents over 1.7 million miles is pretty impressive.

Around the same time as Google’s announcement, the first self-driving Freightliner truck hit the road. (CNN.com even has a video of the truck’s driver using his iPad while the truck is in motion.) Before we had even wrapped our heads around the idea of an autonomous car, autonomous trucks were being tested. Large-scale commercial trucks – 18-wheelers, big rigs, semis, tractor-trailers – were poised to begin their treks across the highways of America with little to no human interaction.

This was, in some ways, the Internet of Things made real: a network of machines that communicate with one another through sensors, shared data, and cloud storage. The thermostat you can adjust with your smartphone, the refrigerator that tells you when you are low on milk, and now the commercial trucks that can drive themselves – these are the technologies of the future, and they are awe-inspiring.

Until something goes wrong, that is. Then they are a potential nightmare.

Parsing liability in the event of an accident

Say what you want about limiting driver fatigue, or about a truck that can, perhaps, rebalance itself while moving, thus avoiding debris spills or even tip-overs. What we want to know is this: when the computers fail, or a car cuts the truck off, and the inevitable truck accident happens, what becomes of the victims? Who is responsible when an automated truck short-circuits, hits a passenger car, and kills the driver or passengers?

It could be the manufacturers. It could be the coders. It could be the designers. It could be almost anyone – a logistical nightmare – and, as usual, the victim is the one who suffers. Human drivers make plenty of errors, yes, but they have one thing that autonomous vehicles don’t: instincts. Those instincts protect drivers on the road every day, as we make split-second decisions rooted in our inherently human will to survive. An autonomous truck won’t have that benefit.

As an article on IEEE.org puts it:

“Today no court ever asks why a driver does anything in particular in the critical moments before a crash. The question is moot as to liability—the driver panicked, he wasn’t thinking, he acted on instinct. But when robots are doing the driving, ‘Why?’ becomes a valid question. Human ethical standards, imperfectly codified in law, make all kinds of assumptions that engineers have not yet dared to make. The most important such assumption is that a person of good judgment will know when to disregard the letter of the law in order to honor the spirit of the law. What engineers must now do is teach the elements of good judgment to cars and other self-guided machines—that is, to robots” (emphasis ours).

Technology is incredible, but all progress brings some dangers, and ignoring those dangers is beyond foolish, even in the name of progress. Our fear is that, at the end of the day, people who are injured when an electrical sensor fails – in a tunnel, on a highway, or anywhere on any road across America – will lose one of their fundamental rights: the right to seek justice in a court of law for the injuries and havoc wreaked upon them. We must confront the ethical and practical implications of autonomous trucks in the event of an accident, and we must do it before those accidents occur.

Crandall & Pera Law is a premier personal injury law firm serving clients throughout Kentucky and Ohio. To schedule a free consultation with an experienced truck accident attorney, please call our Kentucky team at 877.651.7764, our Ohio team at 844-279-2889, or fill out our contact form.