Who Is Liable When AI Kills?


We need to change rules and institutions, while still promoting innovation, to protect people from faulty AI

A California jury may soon have to decide. In December 2019, a driver operating a Tesla with an artificial-intelligence driving system killed two people in a crash in Gardena. The Tesla driver faces several years in prison. In light of this and other incidents, both the National Highway Traffic Safety Administration (NHTSA) and the National Transportation Safety Board are investigating Tesla crashes, and NHTSA has recently broadened its probe to explore how drivers interact with Tesla systems.

Getting the liability landscape right is essential to unlocking AI’s potential. Uncertain rules and potentially costly litigation will discourage investment in, and the development and adoption of, AI systems. The wider adoption of AI in health care, autonomous vehicles and other industries depends on the framework that determines who, if anyone, ends up liable for an injury caused by an artificial-intelligence system.

Yet liability too often focuses on the easiest target: the end user who operates the algorithm. Liability inquiries often start and end with the driver of the car that crashed or the physician who made a faulty treatment decision. The key is to ensure that all stakeholders (users, developers and everyone else along the chain from product development to use) bear enough liability to ensure AI safety and effectiveness, but not so much that they give up on AI.

Some AI errors should also be litigated in special courts with expertise in adjudicating AI cases. These specialized tribunals could develop expertise in particular technologies or issues, such as the interaction of two AI systems. Such specialized courts are not new: in the U.S., for example, specialist courts have protected childhood-vaccine manufacturers for decades by adjudicating vaccine injuries and developing a deep knowledge of the field.

 


SciAm! Got any suggestions? It WILL happen, you know. Hopefully the regulations in place by then will help with accountability and compensation 😌

The company

The Kaylon from TheOrville, clearly. SethMacFarlane

Those who program the code are still human, so they would be held at fault.

In airplane/airline pilot licensing, the FAA is responsible for oversight, so there is a 'truth': the PIC, or pilot in command, is always identified and is ultimately responsible for the safety of any flight or operation of the aircraft. I suspect that, push come to shove, the same will hold here.

If I shoot a gun, I am guilty, but what if the gun shoots itself? 🤔


Similar News: You can also read news stories similar to this one that we have collected from other news sources.

Sort your entire photo library with this AI: With Excire Foto 2022, you can tidy up your image database with just a few clicks of the mouse.
Source: PopSci. Read more »

Tesla lays off nearly 200 Autopilot employees who help train the company’s AI: Labelling data is essential for developing many AI systems.
Source: verge. Read more »

This AI Tool Could Predict the Next Coronavirus Variant: The model, which uses machine learning to track the fitness of different viral strains, accurately predicted the rise of Omicron’s BA.2 subvariant and the Alpha variant.
Source: sciam. Read more »

Using Simulations Of Alleged Ethics Violations To Ardently And Legally Nail Those Biased AI Ethics Transgressors Amid Fully Autonomous Systems: A new means of detecting AI ethics violations consists of simulating an AI system to try to detect or predict that violations could arise. This could be used, for example, in the case of AI-based self-driving cars.
Source: ForbesTech. Read more »
