Gradient Boosting Neural Networks: GrowNet

March 14, 2022 · 5 minute read
Tags: Gradient Boosting Neural Networks: GrowNet, Machine Learning (ML), Artificial Neural Network (ANN)

Description:

AI can be found in every aspect of our lives, from email spam filtering and autonomous vehicles to security and medical diagnostics. Deep learning in particular has been one of the key innovations driving exponential growth across various scientific fields.

Although deep learning has proved, both in theory and in practice, to have remarkable capabilities, building a deep neural network for a new application area remains challenging because of the models' complexity and the lack of any clear paradigm for creating a customized DNN.

As a result, designing an architecture for an intended purpose requires expertise and often a good deal of luck!

Inspired by the gradient boosting algorithm, the creators of GrowNet have introduced a novel approach that builds a neural network from the ground up, layer by layer.

The gradient boosting algorithm incrementally builds sophisticated models out of simpler components, so-called “weak learners,” and can be applied successfully to even the most complex learning tasks.
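
To make the boosting recipe concrete, here is a minimal sketch of gradient boosting for regression with squared loss, where each stage fits a weak learner to the current residuals (the negative gradient of the loss). The depth-1 decision stumps, stage count, and learning rate below are illustrative assumptions, not values taken from any particular framework.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbm(X, y, n_stages=100, lr=0.1):
    """Build an additive model F(x) = f0 + lr * sum_k f_k(x) for squared loss."""
    f0 = float(y.mean())                  # constant initial prediction
    pred = np.full(len(y), f0)
    learners = []
    for _ in range(n_stages):
        residual = y - pred               # negative gradient of squared loss
        stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
        pred += lr * stump.predict(X)     # shrink each stage's contribution
        learners.append(stump)
    return f0, learners

def predict_gbm(X, f0, learners, lr=0.1):
    pred = np.full(X.shape[0], f0)
    for stump in learners:
        pred += lr * stump.predict(X)
    return pred
```

Shrinking each stage's contribution by the learning rate is what lets many weak learners accumulate into one strong model.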

Popular GBDT frameworks like XGBoost, LightGBM, and CatBoost use decision trees as weak learners and combine them using a gradient boosting framework to build complex models.
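
For context, this is roughly what using such a framework looks like in practice. The sketch below uses XGBoost's scikit-learn-style interface with arbitrary placeholder hyperparameters and synthetic data.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=500)

# Each boosting round adds one shallow decision tree fit to the
# gradient of the loss on the current ensemble's predictions.
model = XGBRegressor(n_estimators=200, learning_rate=0.1, max_depth=3)
model.fit(X, y)
preds = model.predict(X)
```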

Despite the decision tree algorithm's usefulness in many cases, it is not equally well suited to every kind of task; on unstructured data such as images and text, for example, deep neural networks clearly outperform decision trees.

“We combine the power of gradient boosting with the flexibility and versatility of neural networks and introduce a new modeling paradigm called GrowNet that can build up a DNN layer by layer. Instead of decision trees, we use shallow neural networks as our weak learners in a general gradient boosting framework that can be applied to a wide variety of tasks spanning classification, regression and ranking,” write the authors of the GrowNet paper. They add: “We introduce further innovations like adding second order statistics to the training process, and also including a global corrective step that has been shown, both in theory [31] and in empirical evaluation, to provide performance lift and precise fine-tuning to the specific task at hand.”
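
Putting the quoted pieces together, a compact PyTorch sketch of the GrowNet recipe might look like the following. It makes several simplifying assumptions: plain residual (first-order) targets in place of the paper's second-order statistics, a fixed rather than learned boosting rate, and arbitrary layer sizes and training schedules. Treat it as an illustration of the idea, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class WeakLearner(nn.Module):
    """A shallow two-layer MLP; returns hidden features plus a scalar output."""
    def __init__(self, in_dim, hidden_dim=16):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        h = self.hidden(x)                 # passed on to the next weak learner
        return h, self.out(h).squeeze(-1)

def ensemble_forward(X, learners, boost_rate):
    """Sum the learners' outputs; learner k sees X plus learner k-1's features."""
    pred, h = torch.zeros(X.shape[0]), None
    for learner in learners:
        inp = X if h is None else torch.cat([X, h], dim=1)
        h, out = learner(inp)
        pred = pred + boost_rate * out
    return pred, h

def train_grownet(X, y, n_learners=5, hidden_dim=16, boost_rate=1.0):
    learners = []
    for _ in range(n_learners):
        in_dim = X.shape[1] + (hidden_dim if learners else 0)
        learner = WeakLearner(in_dim, hidden_dim)
        opt = torch.optim.Adam(learner.parameters(), lr=1e-2)
        for _ in range(200):               # boosting step: fit the residual
            with torch.no_grad():
                prev_pred, prev_h = ensemble_forward(X, learners, boost_rate)
            inp = X if prev_h is None else torch.cat([X, prev_h], dim=1)
            _, out = learner(inp)
            # For squared loss the negative gradient is the residual; the
            # paper additionally uses second-order (Newton) statistics.
            loss = ((y - prev_pred - boost_rate * out) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
        learners.append(learner)
        # Global corrective step: fine-tune ALL learners jointly on the task loss.
        params = [p for l in learners for p in l.parameters()]
        opt_all = torch.optim.Adam(params, lr=1e-3)
        for _ in range(50):
            pred, _ = ensemble_forward(X, learners, boost_rate)
            loss = ((y - pred) ** 2).mean()
            opt_all.zero_grad(); loss.backward(); opt_all.step()
    return learners

# Example: learners = train_grownet(torch.randn(256, 8), torch.randn(256))
```

The corrective step is what distinguishes GrowNet from classical boosting: earlier learners are revisited and updated jointly on the task loss, rather than frozen once trained.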

Although weak learners like decision trees are the most common choice in boosting and ensemble methods, there has also been substantial work on combining neural networks with boosting/ensemble methods to achieve better performance than a single large/deep neural network.

For the full paper, you can visit: https://arxiv.org/abs/2002.07971