Should self-driving cars include black box recorders?


Every commercial airplane has a black box that records everything that happens in the airplane's systems, as well as the pilots' actions, and these records have been invaluable in identifying the causes of problems.

Why shouldn't self-driving cars and robots have the same thing? It's not a hypothetical question.

Eleven people have been killed in collisions involving Tesla cars equipped with Autopilot, a system that allows nearly hands-free driving. One of them was struck by a Tesla while changing a tire on the side of the road.

Despite this, nearly every automaker is expanding its automated driving capabilities. Walmart is partnering with Ford and Argo AI to test self-driving cars for home deliveries, and Lyft is working with the same companies to test a fleet of robotaxis.


But autonomous systems extend well beyond self-driving cars and trucks. Robot welders work on factory floors. Japanese nursing homes use care-bots to deliver meals, monitor patients, and even provide companionship. Walmart and other stores use robots to mop floors. (What could go wrong?)

As everyday interactions with autonomous systems multiply, so do the hazards. A global group of academic researchers in robotics and artificial intelligence, along with industry developers, insurers, and government officials, has published a set of governance recommendations to better anticipate problems and increase accountability. One of the group's core ideas is a black box for any autonomous system.

According to Gregory Falco, an assistant professor of civil and systems engineering at Johns Hopkins University and a researcher at the Stanford Freeman Spogli Institute for International Studies, things will inevitably go wrong as these systems spread. This approach would help assess risks in advance and establish an audit trail for understanding failures after the fact. The primary goal is to increase accountability.

The new recommendations, published in Nature Machine Intelligence, center on three key areas: preparing prospective risk assessments before putting a system to work; maintaining a record of activity, including a black box that captures what happened when an accident occurs; and promoting compliance with local and national laws.

The authors do not advocate for government regulation. Instead, they argue that key stakeholders such as insurers, courts, and customers have a strong interest in urging companies to adopt their strategy. (One of the co-authors is an executive with Swiss Re, the huge reinsurer.) Customers, of course, want to avoid unnecessary dangers.

Companies are already developing black boxes for self-driving cars, in part because the National Transportation Safety Board has alerted manufacturers to the data it will need to investigate accidents. Falco and a colleague have devised one such black box for the industry.

The dangers extend far beyond cars. If a recreational drone slices through a power line and kills someone, there would currently be no black box to reveal what happened. The same is true for a robo-mower that runs amok. Medical devices that use artificial intelligence, the authors argue, need to record time-stamped information on everything that happens while they're in use.
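To make the idea concrete, here is a minimal sketch of what such time-stamped recording could look like: an append-only log that captures each sensor reading and decision the moment it happens. This is an illustration only; the class, field names, and format below are assumptions for the example, not the scheme the paper proposes.

```python
import json
import time
from pathlib import Path


class BlackBoxRecorder:
    """Append-only, time-stamped event log for an autonomous device.

    Illustrative sketch only: the fields and JSON-lines format are
    assumptions, not the recommendations' prescribed implementation.
    """

    def __init__(self, log_path: str):
        self.log_path = Path(log_path)

    def record(self, source: str, event: str, data: dict) -> None:
        # One JSON object per line, flushed immediately so the record
        # survives a crash or sudden power loss.
        entry = {
            "timestamp": time.time(),  # seconds since the epoch
            "source": source,          # e.g. "perception", "planner"
            "event": event,            # e.g. "obstacle_detected"
            "data": data,              # raw values needed for a postmortem
        }
        with self.log_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
            f.flush()


# Example: a hypothetical robo-mower logging a hazard and its response.
recorder = BlackBoxRecorder("blackbox.jsonl")
recorder.record("perception", "obstacle_detected", {"distance_m": 0.4})
recorder.record("planner", "emergency_stop", {"reason": "obstacle"})
```

The point of the append-only, flush-on-write design is that the log remains intact even if the device itself fails mid-operation, which is exactly the audit trail investigators would need for a postmortem.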

The authors argue that companies should be required to publicly disclose both their black box data and the information obtained through human interviews. Public disclosure would let other companies crowdsource safety improvements and incorporate them into their own systems.

Even relatively inexpensive consumer goods, such as robo-mowers, can and should have black box recorders, according to Falco. And at every stage of a product's development and evolution, organizations and industries must conduct risk assessments.

When you have an autonomous agent acting in an open environment, someone must provide information about all the things that can go wrong, according to Falco. What we've done, he says, is provide people with a roadmap for how to think about the dangers and for establishing a data trail to conduct postmortems.

Edmund L. Andrews is a contributing writer for the Stanford Institute for Human-Centered AI.

This article originally appeared on hai.stanford.edu. Copyright 2022.
