Practical Debugging for Data Science
Prologue: Now, before writing about this topic, I did a quick Google search to see how much of it is already covered, and quickly observed a phenomenon that I see increasingly in the field: Data Science = Modelling, or at best, Modelling + Data Processing. Open a MOOC, and they talk about the different models and … Continue reading Practical Debugging for Data Science
Blog Feed
Interpretability: Cracking open the black box – Part III
Previously, we looked at the pitfalls of the default "feature importance" in tree-based models and talked about permutation importance, LOOC importance, and Partial Dependence Plots. Now let's switch lanes and look at a few model-agnostic techniques that take a bottom-up approach to explaining predictions. Instead of looking at the model and trying to come … Continue reading Interpretability: Cracking open the black box – Part III
Interpretability: Cracking open the black box – Part II
In the last post in the series, we defined what interpretability is and looked at a few interpretable models, along with their quirks and 'gotchas'. Now let's dig deeper into post-hoc interpretation techniques, which are useful when the model itself is not transparent. This resonates with most real-world use cases, because whether … Continue reading Interpretability: Cracking open the black box – Part II
Interpretability: Cracking open the black box – Part I
"Interpretability is the degree to which a human can understand the cause of a decision" – Tim Miller[1]. Explainable AI (XAI) is a sub-field of AI that has been gaining ground in the recent past. And as a machine learning practitioner dealing with customers day in and day out, I can see why. I've been … Continue reading Interpretability: Cracking open the black box – Part I
