Browsing by Author "Zackey, Matthew David"
- Item: Modern gradient boosting (Stellenbosch : Stellenbosch University, 2024-03)
  Zackey, Matthew David; Uys, Daniel Wilhelm; Steel, Sarel Johannes
  Stellenbosch University. Faculty of Economic and Management Sciences. Dept. of Statistics and Actuarial Science.

  ENGLISH SUMMARY: Boosting is a supervised learning procedure that has gained considerable interest in statistical and machine learning owing to its powerful predictive performance. The idea of boosting is to obtain a model ensemble by sequentially fitting base learners to modified versions of the training data. The first complete boosting procedure was Adaptive Boosting (AdaBoost), designed for binary classification. Gradient boosting followed AdaBoost, allowing boosting to be applied to any continuous and differentiable loss function. The most frequently used version of gradient boosting is Multiple Additive Regression Trees (MART), where trees are specified as the base learners. In the last several years, there have been numerous extensions to MART aiming to improve its predictive performance and scalability. Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM) and Categorical Boosting (CatBoost) are three of these extensions, which in this thesis are termed the modern gradient boosting methods.

  The thesis introduces boosting by reviewing the details of AdaBoost, forward stagewise additive modelling (FSAM) and gradient boosting. Notably, the equivalence of AdaBoost and FSAM with the exponential loss is proved, FSAM for regression with trees is considered, and the need for an efficient procedure such as gradient boosting is emphasised. Additionally, two derivations of gradient boosting are provided: the first considers gradient boosting as an approximation to steepest descent of the empirical risk, while the second views gradient boosting as taking a quadratic approximation of FSAM. Since trees are a popular choice of base learner in gradient boosting, details are also given on MART.

  The remainder of the thesis studies the modern methods, focusing on the mathematical details of their novelties. Examples, illustrations and simulations are given for some of these novelties to provide further clarity. Additionally, empirical studies investigating the generalisation performance of certain novelties are presented. More specifically, these studies consider the performance of XGBoost's regularisation parameters in tree-building, Gradient-based One-Side Sampling (GOSS) from LightGBM, the Plain and Ordered modes in CatBoost, and CatBoost's use of cosine similarity to construct trees. In these experiments, several binary classification datasets with varying characteristics are considered: size, class imbalance, sparsity and the inclusion of categorical features.
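As a reference point for the two derivations mentioned in the summary, the standard FSAM and gradient boosting updates can be sketched as follows (generic notation in the style of Hastie, Tibshirani and Friedman; the thesis's own notation may differ). At stage m, FSAM solves

    (\beta_m, \gamma_m) = \arg\min_{\beta, \gamma} \sum_{i=1}^{n} L\big(y_i,\; f_{m-1}(x_i) + \beta\, b(x_i; \gamma)\big),
    \qquad f_m(x) = f_{m-1}(x) + \beta_m\, b(x; \gamma_m).

Gradient boosting sidesteps this generally intractable joint minimisation by instead fitting the base learner to the pseudo-residuals

    r_{im} = -\left[ \frac{\partial L\big(y_i, f(x_i)\big)}{\partial f(x_i)} \right]_{f = f_{m-1}},

which form the steepest-descent direction of the empirical risk in function space.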
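To make the steepest-descent view concrete, the following is a minimal sketch of gradient boosting with squared-error loss and tree base learners, i.e. a toy version of MART (the function and parameter names are illustrative, not taken from the thesis). With L(y, f) = (y - f)^2 / 2, the negative gradient at each training point is the ordinary residual, so each round simply fits a small tree to the current residuals:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def gradient_boost(X, y, n_rounds=100, learning_rate=0.1, max_depth=3):
        """Gradient boosting for squared-error loss with tree base learners."""
        f0 = y.mean()                     # f_0: best constant fit
        f = np.full(len(y), f0)
        trees = []
        for _ in range(n_rounds):
            residuals = y - f             # negative gradient of the squared-error loss
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
            f += learning_rate * tree.predict(X)   # shrunken steepest-descent step
            trees.append(tree)
        return f0, trees

    def predict(f0, trees, X, learning_rate=0.1):
        return f0 + learning_rate * sum(tree.predict(X) for tree in trees)

Here learning_rate plays the role of the shrinkage parameter: smaller values take more conservative steps down the empirical risk and typically require more boosting rounds.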
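For orientation, the novelties examined in the empirical studies correspond to user-facing options in the three libraries' Python APIs. The sketch below shows where they surface; it is not a reproduction of the thesis's experimental settings, and the way GOSS is selected differs slightly across LightGBM versions (newer releases prefer data_sample_strategy='goss' over boosting_type='goss'):

    from sklearn.datasets import make_classification
    from xgboost import XGBClassifier
    from lightgbm import LGBMClassifier
    from catboost import CatBoostClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # XGBoost: gamma penalises each additional leaf and reg_lambda is the
    # L2 penalty on leaf weights; both act during tree-building.
    xgb = XGBClassifier(n_estimators=200, gamma=1.0, reg_lambda=2.0)

    # LightGBM: GOSS keeps the rows with the largest gradients and
    # subsamples the rest (top_rate / other_rate set the two fractions).
    lgbm = LGBMClassifier(n_estimators=200, boosting_type='goss',
                          top_rate=0.2, other_rate=0.1)

    # CatBoost: boosting_type switches between the Ordered and Plain modes.
    cat = CatBoostClassifier(iterations=200, boosting_type='Ordered',
                             verbose=False)

    for name, model in [('XGBoost', xgb), ('LightGBM', lgbm), ('CatBoost', cat)]:
        model.fit(X, y)
        print(name, model.score(X, y))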