LightGBM

LightGBM is a gradient boosting framework that uses tree-based learning algorithms.

How does it differ from other tree-based algorithms?

LightGBM grows trees vertically while other algorithms grow them horizontally, meaning that LightGBM grows trees leaf-wise while other algorithms grow them level-wise. At each step it chooses the leaf with the maximum delta loss to grow. When growing the same leaf, a leaf-wise algorithm can reduce more loss than a level-wise algorithm.

(Diagram: leaf-wise tree growth in LightGBM vs. level-wise growth in other boosting algorithms.)


Why is LightGBM gaining so much popularity?

The size of data is increasing day by day, and it is becoming difficult for traditional data science algorithms to give fast results. LightGBM is prefixed with 'Light' because of its high speed. It can handle large datasets and requires less memory to run. Another reason for its popularity is its focus on accuracy of results. LGBM also supports GPU learning, so data scientists are widely using it for data science application development.

Can we use Light GBM everywhere?

No, it is not advisable to use LGBM on small datasets. LightGBM is sensitive to overfitting and can easily overfit small data. There is no hard threshold on the number of rows, but my experience suggests using it only for data with 10,000+ rows.

We have briefly discussed the concept of LightGBM; now what about its implementation?

Implementation of LightGBM is easy; the only complicated thing is parameter tuning. LightGBM covers more than 100 parameters, but don't worry, you don't need to learn all of them.

It is very important for an implementer to know at least the basic parameters of LightGBM. If you carefully go through the following parameters, I bet you will find this powerful algorithm a piece of cake.
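Before discussing the parameters one by one, here is a minimal sketch of the train/predict workflow with the Python API. The data below is randomly generated purely for illustration, and the parameter values are placeholders that the following sections explain.

    import numpy as np
    import lightgbm as lgb
    from sklearn.model_selection import train_test_split

    # Placeholder data: 10,000 rows of 20 numeric features with a binary target.
    X = np.random.rand(10000, 20)
    y = np.random.randint(0, 2, size=10000)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2)

    # Wrap the arrays in LightGBM Dataset objects.
    train_set = lgb.Dataset(X_train, label=y_train)
    valid_set = lgb.Dataset(X_valid, label=y_valid, reference=train_set)

    # A small parameter dictionary; the individual keys are explained below.
    params = {
        'application': 'binary',     # binary classification
        'metric': 'binary_logloss',  # loss reported on the validation set
        'learning_rate': 0.1,
        'num_leaves': 31,
    }

    model = lgb.train(params, train_set, num_boost_round=100, valid_sets=[valid_set])
    preds = model.predict(X_valid)   # predicted probabilities of the positive class

The later snippets in this post reuse the train_set and valid_set objects defined here.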

Let’s start discussing parameters.

Parameters

Control Parameters

  • max_depth: It describes the maximum depth of the tree. This parameter is used to control overfitting. Any time you feel that your model is overfitting, my first advice is to lower max_depth.
  • min_data_in_leaf: The minimum number of records a leaf may have. The default value is 20, which is a reasonable starting point. It is also used to deal with overfitting.
  • feature_fraction: Used when your boosting (discussed later) is random forest. A feature_fraction of 0.8 means LightGBM will randomly select 80% of the features at each iteration when building trees.
  • bagging_fraction: Specifies the fraction of data to be used in each iteration; it is generally used to speed up training and avoid overfitting, and takes effect together with bagging_freq.
  • early_stopping_round: This parameter can help you speed up your analysis. The model will stop training if the metric on one validation set does not improve in the last early_stopping_round rounds, which cuts out excessive iterations.
  • lambda: Specifies regularization. Typical values range from 0 to 1 (see lambda_l1 and lambda_l2 in the tuning section below).
  • min_gain_to_split: This parameter describes the minimum gain required to make a split. It can be used to control the number of useful splits in a tree.
  • max_cat_group: When the number of categories is large, finding the split point on them easily leads to overfitting, so LightGBM merges categories into at most max_cat_group groups and finds split points on the group boundaries. Default: 64. A combined sketch of these control parameters follows this list.
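As a rough sketch, the control parameters above might appear together in a parameter dictionary like the one below, continuing from the earlier example. The numbers are illustrative starting points rather than recommendations for any particular dataset, and early_stopping_round only takes effect when a validation set is supplied to lgb.train. Note that max_cat_group follows this article and older LightGBM docs; newer releases may ignore or rename it.

    params = {
        'application': 'binary',
        'max_depth': 7,               # limit tree depth to control overfitting
        'min_data_in_leaf': 20,       # minimum records per leaf (default 20)
        'feature_fraction': 0.8,      # randomly use 80% of features each iteration
        'bagging_fraction': 0.8,      # randomly use 80% of rows each iteration
        'bagging_freq': 5,            # perform bagging every 5 iterations
        'early_stopping_round': 100,  # stop when the validation metric stalls
        'lambda_l1': 0.1,             # L1 regularization (see "lambda" above)
        'min_gain_to_split': 0.0,     # minimum gain required to make a split
        'max_cat_group': 64,          # merge categories into at most 64 groups (older LightGBM)
    }

    # Early stopping needs a validation set to monitor.
    model = lgb.train(params, train_set, num_boost_round=1000, valid_sets=[valid_set])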

Core Parameters

task: It specifies the task you want to perform on the data; it may be either train or predict.

application: This is the most important parameter; it specifies the application of your model, i.e. whether it is a regression problem or a classification problem. LightGBM treats the model as a regression model by default.

  1. regression: for regression
  2. binary: for binary classification
  3. multiclass: for multiclass classification problem
boosting: defines the type of algorithm you want to run, default=gbdt

  • gbdt: traditional Gradient Boosting Decision Tree
  • rf: random forest
  • dart: Dropouts meet Multiple Additive Regression Trees
  • goss: Gradient-based One-Side Sampling

num_boost_round: Number of boosting iterations, typically 100+

learning_rate: This determines the impact of each tree on the final outcome. GBM works by starting with an initial estimate which is updated using the output of each tree. The learning parameter controls the magnitude of this change in the estimates. Typical values: 0.1, 0.001, 0.003…

num_leaves: maximum number of leaves in one tree, default: 31

device: default: cpu, can also pass gpu
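Putting the core parameters together, a multiclass setup might look like the sketch below, again reusing the train_set object from the first example. Note that num_class is not listed above but LightGBM requires it whenever application is multiclass; all values are illustrative.

    params = {
        'task': 'train',
        'application': 'multiclass',  # regression / binary / multiclass
        'num_class': 3,               # number of classes, required for multiclass
        'boosting': 'gbdt',           # gbdt / rf / dart / goss
        'learning_rate': 0.05,
        'num_leaves': 31,
        'device': 'cpu',              # 'gpu' if LightGBM was built with GPU support
    }

    model = lgb.train(params, train_set, num_boost_round=200)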

Metric parameter

metric: again, one of the important parameters, as it specifies the loss function used when building the model. Below are a few common losses for regression and classification, combined in a short sketch after the list.

  • mae: mean absolute error
  • mse: mean squared error
  • binary_logloss: loss for binary classification
  • multi_logloss: loss for multiclass classification
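For instance, the metric parameter might be set as follows for the three application types; LightGBM also accepts a list when you want several metrics reported at once. The values here are just examples.

    regression_params = {'application': 'regression', 'metric': ['mae', 'mse']}
    binary_params     = {'application': 'binary', 'metric': 'binary_logloss'}
    multiclass_params = {'application': 'multiclass', 'num_class': 5, 'metric': 'multi_logloss'}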

IO parameter

  • max_bin: It denotes the maximum number of bins that feature values will be bucketed into.
  • categorical_feature: It denotes the indices of categorical features. If categorical_feature=0,1,2 then columns 0, 1 and 2 are treated as categorical variables.
  • ignore_column: Same as categorical_feature, but instead of considering the specified columns as categorical, it ignores them completely.
  • save_binary: If memory or data-loading time is a concern, set this parameter to true. The dataset will then be saved to a binary file, and reading that binary file will speed up data loading next time. The sketch after this list puts these IO parameters together.
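A rough sketch of the IO parameters in use, reusing X_train and y_train from the first example; the column indices and file name are hypothetical. ignore_column applies when LightGBM loads data directly from a text file, so it is omitted here.

    # Treat the first three columns as categorical and cap the number of bins.
    train_set = lgb.Dataset(
        X_train,
        label=y_train,
        categorical_feature=[0, 1, 2],
        params={'max_bin': 255},
    )

    # Save the constructed Dataset to a binary file; reloading it later skips
    # the raw-data parsing step and speeds up reading.
    train_set.save_binary('train.bin')
    reloaded_set = lgb.Dataset('train.bin')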

Knowing and using the above parameters will definitely help you implement the model. Remember, I said that implementation of LightGBM is easy but parameter tuning is difficult.

Parameter Tuning:

Data scientists always struggle to decide when to use which parameter and what its ideal value should be.

The following set of practices can be used to improve your model's efficiency.

  • num_leaves: This is the main parameter for controlling the complexity of the tree model. Ideally, the value of num_leaves should be less than or equal to 2^(max_depth); anything larger will result in overfitting.
  • min_data_in_leaf: Setting it to a large value can avoid growing too deep a tree, but may cause under-fitting. In practice, setting it to hundreds or thousands is enough for a large dataset.
  • max_depth: You can also use max_depth to limit the tree depth explicitly; the snippet after this list shows the relationship between the two parameters.
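The num_leaves rule of thumb is easy to express in code; the depth value here is just an example.

    max_depth = 7

    params = {
        'max_depth': max_depth,
        'num_leaves': 2 ** max_depth - 1,  # stay at or below 2 ** max_depth
        'min_data_in_leaf': 100,           # hundreds or more for large datasets
    }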

For Faster Speed:

  • Use bagging by setting bagging_fraction and bagging_freq
  • Use feature sub-sampling by setting feature_fraction
  • Use small max_bin
  • Use save_binary to speed up data loading in future learning
  • Use parallel learning; refer to the parallel learning guide. A speed-oriented configuration is sketched below.
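A speed-oriented configuration combining the tips above might look like this; the exact fractions and bin count are illustrative, and train_set is the Dataset from the first example.

    fast_params = {
        'bagging_fraction': 0.7,  # row sub-sampling...
        'bagging_freq': 1,        # ...re-sampled every iteration
        'feature_fraction': 0.7,  # column sub-sampling
        'max_bin': 63,            # coarser histograms train faster
    }

    # Saving the Dataset once also speeds up data loading in later runs.
    train_set.save_binary('train.bin')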

For better accuracy:

  • Use large max_bin (may be slower)
  • Use small learning_rate with large num_iterations
  • Use large num_leaves (may cause over-fitting)
  • Use bigger training data
  • Try dart
  • Try to use categorical features directly (an accuracy-oriented configuration is sketched below)
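An accuracy-oriented configuration, again with purely illustrative values; the trade-off is longer training time and a higher risk of over-fitting, so keep a validation set in the loop.

    accurate_params = {
        'boosting': 'dart',     # try DART instead of plain gbdt
        'max_bin': 511,         # finer histograms (may be slower)
        'learning_rate': 0.01,  # small learning rate...
        'num_leaves': 127,      # large num_leaves (may cause over-fitting)
    }

    # ...paired with many boosting iterations.
    model = lgb.train(accurate_params, train_set, num_boost_round=2000, valid_sets=[valid_set])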

To deal with over-fitting:

  • Use small max_bin
  • Use small num_leaves
  • Use min_data_in_leaf and min_sum_hessian_in_leaf
  • Use bagging by setting bagging_fraction and bagging_freq
  • Use feature sub-sampling by setting feature_fraction
  • Use bigger training data
  • Try lambda_l1, lambda_l2 and min_gain_to_split for regularization
  • Try max_depth to avoid growing deep trees (a conservative configuration is sketched below)
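Finally, a conservative configuration that leans on the over-fitting controls above; as before, every number is only a starting point to tune against a validation set.

    conservative_params = {
        'max_bin': 63,                    # coarser histograms
        'num_leaves': 15,                 # small trees
        'max_depth': 6,                   # explicit depth limit
        'min_data_in_leaf': 100,          # require more records per leaf
        'min_sum_hessian_in_leaf': 10.0,  # require more total hessian per leaf
        'bagging_fraction': 0.8,
        'bagging_freq': 1,
        'feature_fraction': 0.8,
        'lambda_l1': 0.1,                 # L1 regularization
        'lambda_l2': 0.1,                 # L2 regularization
        'min_gain_to_split': 0.01,
    }
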
***Thank You***
