Boosting is an ensemble learning method in which a collection of weak learners is trained sequentially rather than in parallel, as in bagging, so each learner depends on those trained before it. The idea is to improve performance by using the information gained from each previous learner: errors are typically reduced at each step, and model performance generally improves as the number of trees grows. Because boosting algorithms run in sequence, they are not well suited to parallel computing.
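The sequential training described above can be sketched with scikit-learn's `AdaBoostClassifier` (a minimal example, assuming scikit-learn is installed; the dataset is synthetic and for illustration only):

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import make_classification

# Synthetic binary classification problem for illustration.
X, y = make_classification(n_samples=300, random_state=0)

# Each of the 50 weak learners (depth-1 trees by default) is fit
# one after another on data reweighted by the previous learner's errors.
model = AdaBoostClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

train_acc = model.score(X, y)
n_learners = len(model.estimators_)  # the fitted sequence of weak learners
```

Because each learner's training data weights depend on the previous learner's mistakes, this loop cannot be distributed across workers the way bagging's independent bootstrap fits can.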
Additionally, boosting algorithms train on the entire dataset and tend to optimize aggressively for it, which can lead to overfitting. A common remedy is to limit the depth of the individual trees (the weak learners).
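The effect of limiting tree depth can be illustrated with scikit-learn's `GradientBoostingClassifier` (a sketch under assumed settings; the noisy synthetic dataset and depth values are illustrative, not prescriptive):

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# flip_y injects label noise so the deep ensemble has something to memorize.
X, y = make_classification(n_samples=500, flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Deep weak learners: the ensemble can fit the noisy training set closely.
deep = GradientBoostingClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)

# Shallow weak learners: a common way to regularize a boosted ensemble.
shallow = GradientBoostingClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)

# Train-test gap is one rough signal of overfitting.
gap_deep = deep.score(X_tr, y_tr) - deep.score(X_te, y_te)
gap_shallow = shallow.score(X_tr, y_tr) - shallow.score(X_te, y_te)
```

In runs like this, the deep-tree ensemble typically shows a larger train-test gap than the shallow one, which is why reducing tree depth is a standard first step against boosting overfitting.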
When used on imbalanced datasets, boosting algorithms typically assign higher weight to misclassified minority-class examples when training the next learner, which helps improve model performance on those classes.
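This reweighting step can be shown numerically with an AdaBoost-style update (a worked sketch; the labels and predictions are hypothetical values chosen for illustration):

```python
import numpy as np

# Imbalanced labels: eight majority-class (1) points, two minority (0) points.
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
# A weak learner that predicts the majority class everywhere misses both 0s.
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])

weights = np.full(len(y_true), 1 / len(y_true))  # start uniform: 0.1 each
err = weights[y_pred != y_true].sum()            # weighted error = 0.2
alpha = 0.5 * np.log((1 - err) / err)            # this learner's vote weight

# AdaBoost-style update: multiply misclassified weights by e^alpha and
# correct ones by e^-alpha, then renormalize to sum to 1.
signs = np.where(y_pred == y_true, -1.0, 1.0)
weights = weights * np.exp(alpha * signs)
weights /= weights.sum()

# The missed minority points now carry weight 0.25 each, versus 0.0625
# for each correctly classified majority point, so the next learner
# pays four times as much attention to them.
```

The next weak learner is then trained with these updated sample weights, which is what pushes the ensemble toward the previously missed minority examples.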
Related: AdaBoost