On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments

Abstract The necessity for explainability of artificial intelligence technologies in medical applications has been widely discussed and heavily debated within the literature. This paper comprises a systematized review of the arguments supporting and opposing this purported necessity. Both sides of the debate within the literature are quoted to synthesize discourse on common recurring themes and subsequently critically analyze and respond to it.

While the use of autonomous black box algorithms is compellingly discouraged, the same cannot be said for the whole of medical artificial intelligence technologies that lack explainability. We contribute novel comparisons of unexplainable clinical artificial intelligence tools, diagnosis of idiopathy, and diagnoses by exclusion, to analyze implications for patient autonomy and informed consent. Applying a novel approach using comparisons with clinical practice guidelines, we contest the claim that lack of explainability compromises clinician due diligence and undermines epistemological responsibility.

We find it problematic that many arguments in favour of the practical, ethical, or legal necessity of clinical artificial intelligence explainability conflate the use of unexplainable AI with automated decision making, or equate the use of clinical artificial intelligence with its exclusive use.
