The Gradient Boosters VI(A): Natural Gradient

We are taking a brief detour from the series to understand what the Natural Gradient is. The next algorithm we examine in the Gradient Boosting world is NGBoost, and to understand it completely, we need to understand what Natural Gradients are. Pre-reads: I will be talking about KL Divergence, and if you are unfamiliar with the …

The Gradient Boosters II: Regularized Greedy Forest

In 2011, Rie Johnson and Tong Zhang proposed a modification to the Gradient Boosting model, which they called the Regularized Greedy Forest. When they came up with the modification, GBDTs were already, more or less, ruling the tabular world. They tested the new modification on a wide variety of datasets, both synthetic and real-world, and found …

The Gradient Boosters I: The Good Old Gradient Boosting

In 2001, Jerome H. Friedman wrote a seminal paper, Greedy function approximation: A gradient boosting machine. Little did he know that it was going to evolve into a class of methods which threatens Wolpert's No Free Lunch theorem in the tabular world. Gradient Boosting and its cousins (XGBoost and LightGBM) have conquered the world by …

Interpretability: Cracking open the black box – Part III

Previously, we looked at the pitfalls of the default "feature importance" in tree-based models and talked about permutation importance, LOOC importance, and Partial Dependence Plots. Now let's switch lanes and look at a few model-agnostic techniques which take a bottom-up approach to explaining predictions. Instead of looking at the model and trying to come …