We have come a long way in the world of Gradient Boosting. If you have followed the whole series, you should have a much better understanding of the theory and practical aspects of the major algorithms in this space. After a grim walk through the math and theory behind these algorithms, I thought it would … Continue reading The Gradient Boosters VII: Battle of the Boosters
The Gradient Boosters III: XGBoost
Now let's address the elephant in the room - XGBoost. This is the most popular cousin in the Gradient Boosting family. XGBoost, with its blazing-fast implementation, stormed onto the scene and quickly turned the tables in its favor. Soon enough, Gradient Boosting, via XGBoost, was the reigning king in Kaggle competitions and … Continue reading The Gradient Boosters III: XGBoost
The Gradient Boosters I: The Good Old Gradient Boosting
In 2001, Jerome H. Friedman published a seminal paper - Greedy function approximation: A gradient boosting machine. Little did he know that it would evolve into a class of methods that challenges Wolpert's No Free Lunch theorem in the tabular world. Gradient Boosting and its cousins (XGBoost and LightGBM) have conquered the world by … Continue reading The Gradient Boosters I: The Good Old Gradient Boosting