Must Health AI Be Explainable if It Is Reliable?

Sune Hannibal Holm

Open online seminar with Sune Hannibal Holm, Department of Food and Resource Economics.

About the seminar

Impressively accurate machine learning algorithms are being developed for clinical decision support. A widespread concern is that the outputs of these algorithms, e.g. diagnostic classifications, treatment suggestions, and risk scores, cannot be explained to the relevant users. In this talk I discuss whether explanations should be required even when an algorithm has been shown to be reliable. I suggest that explanations are likely required when a black-box AI tool undertakes tasks that users cannot perform or validate themselves. If the user can verify outputs manually, documented reliability and accuracy may suffice, but explainability can still add value when outputs are uncertain or errors occur.

How to participate

The seminar is open to all and will take place online via Zoom.