
Explainability and Visibility into Covid-19 X-Ray Classifiers by Deep Learning

Keywords: Deep Learning, Grad-CAM, X-Ray, Covid-19, HealthShare, IRIS


Purpose


Over the Easter weekend I touched on a deep learning classifier for Covid-19 lung X-rays. The demo results seemed fine, apparently matching some academic research publications on this topic from around that time. But are they really "fine"?

Recently I happened to listen to an online lunch webinar on "Explainability in Machine Learning", and Don talked about this classification result at the end of his talk:

(Figure: an explanation of a dog-vs-wolf classifier, highlighting the background pixels it relied on.)
The above figure is also presented in the research paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier. It shows that the classifier had actually learned to rely on background pixels, such as snow and other cues of a wild environment, as its main inputs when deciding whether an image shows a pet dog or a wild wolf.


That rekindled an old interest of mine, and it certainly stirred up a bit of curiosity:

  • How could we "look into" these Covid-19 classifiers, normally presented as "black boxes", to see which pixels actually contributed to a "Covid-19 Lung" result?

  • What is the simplest approach, or the simplest tool, that we could leverage for this? (A minimal Grad-CAM sketch is given below as one possibility.)



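As a first taste of what such an inspection could look like, here is a minimal Grad-CAM sketch against a generic tf.keras CNN classifier. The model file name and the last convolutional layer name in it are hypothetical placeholders, not the actual artifacts from the earlier demo.

```python
# A minimal Grad-CAM sketch (assumption: a tf.keras CNN classifier; the file name
# "covid_cnn.h5" and layer name "conv5_block3_out" below are placeholders).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return an (h, w) heatmap of how strongly each region drove the predicted class."""
    # Model that maps the input image to the last conv activations and the predictions.
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_output, predictions = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(predictions[0]))
        class_score = predictions[:, class_index]
    # Gradient of the class score w.r.t. the conv feature maps,
    # pooled into one importance weight per feature map.
    grads = tape.gradient(class_score, conv_output)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps, ReLU, then normalise to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_output[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Usage (hypothetical names): load the trained classifier, compute the heatmap,
# then resize and overlay it on the X-ray to see which lung regions mattered.
# model = tf.keras.models.load_model("covid_cnn.h5")
# heatmap = grad_cam(model, xray_image, "conv5_block3_out")
```

If such a heatmap lights up mostly outside the lungs, that would be the X-ray equivalent of the snow-background wolf classifier above.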
