LightGBM regressor: Python examples

lightgbm.LGBMRegressor

class lightgbm.LGBMRegressor(boosting_type='gbdt', num_leaves=31, max_depth=-1, learning_rate=0.1, n_estimators=100, subsample_for_bin=200000, objective=None, class_weight=None, min_split_gain=0.0, min_child_weight=0.001, min_child_samples=20, subsample=1.0, subsample_freq=0, colsample_bytree=1.0, reg_alpha=0.0, reg_lambda=0.0, random_state=None, n_jobs=None, importance_type='split', **kwargs)

__init__(boosting_type='gbdt', num_leaves=31, max_depth=-1, learning_rate=0.1, n_estimators=100, subsample_for_bin=200000, objective=None, class_weight=None, min_split_gain=0.0, min_child_weight=0.001, min_child_samples=20, subsample=1.0, subsample_freq=0, colsample_bytree=1.0, reg_alpha=0.0, reg_lambda=0.0, random_state=None, n_jobs=None, importance_type='split', **kwargs)

Construct a gradient boosting model.

  • boosting_type (str, optional (default='gbdt')) – 'gbdt', traditional Gradient Boosting Decision Tree; 'dart', Dropouts meet Multiple Additive Regression Trees; 'rf', Random Forest.
  • num_leaves (int, optional (default=31)) – Maximum tree leaves for base learners.
  • max_depth (int, optional (default=-1)) – Maximum tree depth for base learners; <=0 means no limit.
  • learning_rate (float, optional (default=0.1)) – Boosting learning rate. You can use the callbacks parameter of the fit method to shrink/adapt the learning rate during training using the reset_parameter callback. Note that this will ignore the learning_rate argument in training.
  • n_estimators (int, optional (default=100)) – Number of boosted trees to fit.
  • subsample_for_bin (int, optional (default=200000)) – Number of samples for constructing bins.
  • objective (str, callable or None, optional (default=None)) – Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below). Default: 'regression' for LGBMRegressor, 'binary' or 'multiclass' for LGBMClassifier, 'lambdarank' for LGBMRanker.
  • class_weight (dict, 'balanced' or None, optional (default=None)) – Weights associated with classes in the form {class_label: weight}. Use this parameter only for multi-class classification tasks; for binary classification you may use the is_unbalance or scale_pos_weight parameters. Note that using any of these parameters will result in poor estimates of the individual class probabilities; you may want to consider performing probability calibration (https://scikit-learn.org/stable/modules/calibration.html) of your model. The 'balanced' mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). If None, all classes are supposed to have weight one. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
  • min_split_gain (float, optional (default=0.)) – Minimum loss reduction required to make a further partition on a leaf node of the tree.
  • min_child_weight (float, optional (default=1e-3)) – Minimum sum of instance weight (Hessian) needed in a child (leaf).
  • min_child_samples (int, optional (default=20)) – Minimum number of data needed in a child (leaf).
  • subsample (float, optional (default=1.)) – Subsample ratio of the training instances.
  • subsample_freq (int, optional (default=0)) – Frequency of row subsampling; <=0 disables it.
  • colsample_bytree (float, optional (default=1.)) – Subsample ratio of columns when constructing each tree.
  • reg_alpha (float, optional (default=0.)) – L1 regularization term on weights.
  • reg_lambda (float, optional (default=0.)) – L2 regularization term on weights.
  • random_state (int, RandomState object or None, optional (default=None)) – Random number seed. If int, this number is used to seed the C++ code. If a RandomState object (numpy), a random integer is picked based on its state to seed the C++ code. If None, the default seeds in the C++ code are used.
  • n_jobs (int or None, optional (default=None)) – Number of parallel threads to use for training (can be changed at prediction time by passing it as an extra keyword argument). For better performance, it is recommended to set this to the number of physical cores in the CPU. Negative integers are interpreted following joblib's formula (n_cpus + 1 + n_jobs), just like scikit-learn (so e.g. -1 means using all threads). A value of zero corresponds to the default number of threads configured for OpenMP in the system. A value of None (the default) corresponds to using the number of physical cores in the system (its correct detection requires either the joblib or psutil utility libraries to be installed).
  • importance_type (str, optional (default='split')) – The type of feature importance to be filled into feature_importances_. If 'split', the result contains the number of times a feature is used in the model. If 'gain', the result contains the total gains of the splits which use the feature.
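For illustration, here is a minimal construction sketch using several of the parameters above; the values chosen are arbitrary examples, not tuned recommendations.

from lightgbm import LGBMRegressor

# Minimal sketch: hyperparameter values are illustrative only.
model = LGBMRegressor(
    boosting_type='gbdt',    # traditional gradient boosting decision tree
    num_leaves=31,           # maximum leaves per base learner
    max_depth=-1,            # no depth limit
    learning_rate=0.05,      # boosting learning rate
    n_estimators=200,        # number of boosted trees
    min_child_samples=20,    # minimum data needed in a leaf
    subsample=0.8,           # row subsample ratio
    subsample_freq=1,        # resample rows every iteration
    colsample_bytree=0.8,    # column subsample ratio per tree
    reg_alpha=0.1,           # L1 regularization
    reg_lambda=0.1,          # L2 regularization
    random_state=42,
)

Fitting and prediction then follow the usual scikit-learn interface; see the sketch after the fit signature below.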

A custom objective function can be provided for the objective parameter. In this case, it should have the signature objective(y_true, y_pred) -> grad, hess, objective(y_true, y_pred, weight) -> grad, hess or objective(y_true, y_pred, weight, group) -> grad, hess:

y_true numpy 1-D array of shape = [n_samples]

The target values.

y_pred numpy 1-D array of shape = [n_samples] or numpy 2-D array of shape = [n_samples, n_classes] (for multi-class task)

The predicted values. Predicted values are returned before any transformation, e.g. they are raw margin instead of probability of positive class for binary task.

weight numpy 1-D array of shape = [n_samples]

The weight of samples. Weights should be non-negative.

group numpy 1-D array

Group/query data. Only used in the learning-to-rank task. sum(group) = n_samples. For example, if you have a 100-document dataset with group = [10, 20, 40, 10, 10, 10] , that means that you have 6 groups, where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc.

grad numpy 1-D array of shape = [n_samples] or numpy 2-D array of shape = [n_samples, n_classes] (for multi-class task)

The value of the first order derivative (gradient) of the loss with respect to the elements of y_pred for each sample point.

hess numpy 1-D array of shape = [n_samples] or numpy 2-D array of shape = [n_samples, n_classes] (for multi-class task)

The value of the second order derivative (Hessian) of the loss with respect to the elements of y_pred for each sample point.

For multi-class task, y_pred is a numpy 2-D array of shape = [n_samples, n_classes], and grad and hess should be returned in the same format.
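As a concrete, hedged example, a hand-written squared-error objective for a single-output regression task could look like the sketch below; the function name mse_objective is our own, not part of the library.

import numpy as np
from lightgbm import LGBMRegressor

def mse_objective(y_true, y_pred):
    # Gradient and Hessian of 0.5 * (y_pred - y_true)**2 with respect to y_pred.
    grad = y_pred - y_true
    hess = np.ones_like(y_pred)
    return grad, hess

# Pass the callable through the objective parameter described above.
reg = LGBMRegressor(objective=mse_objective)

As noted above, predictions made with a custom objective are raw values, returned before any transformation.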

fit(X, y[, sample_weight, init_score, …])

Build a gradient boosting model from the training set (X, y).
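A short sketch of the scikit-learn style fit/predict workflow follows; the synthetic make_regression data and the uniform sample weights are illustrative assumptions, not part of the documentation above.

import numpy as np
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Illustrative data; any numeric (X, y) regression arrays work the same way.
X, y = make_regression(n_samples=1000, n_features=10, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

reg = LGBMRegressor(n_estimators=200, learning_rate=0.05, random_state=0)
# sample_weight is optional; here every sample simply gets weight 1.0.
reg.fit(X_train, y_train, sample_weight=np.ones(len(y_train)))
y_pred = reg.predict(X_test)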

Source: DataTechNotes

LightGBM is an open-source gradient boosting framework that is based on tree learning algorithms and is designed to process data faster and provide better accuracy. It can handle large datasets with lower memory usage and supports distributed learning. You can find all the details about the API in the official documentation.

LightGBM can be used for regression, classification, ranking and other machine learning tasks. In this tutorial, you’ll briefly learn how to fit and predict regression data by using LightGBM in Python. The tutorial covers:

  1. Preparing the data
  2. Building the model
  3. Prediction and accuracy check
  4. Visualizing the results
  5. Source code listing
import lightgbm as lgb
from sklearn.datasets import load_boston
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from pandas import DataFrame
import matplotlib.pyplot as plt

If you haven't installed LightGBM yet, you can install it via pip (pip install lightgbm).

We use the Boston Housing Price dataset as the target regression data; it can easily be loaded from the sklearn.datasets module. Note that load_boston was deprecated in scikit-learn 1.0 and removed in 1.2, so running this example as-is requires an older scikit-learn version or a replacement dataset. To keep the feature column names, I'll use a pandas DataFrame for the feature data. Then we'll split the data into train and test parts.

boston = load_boston()
x, y = boston.data, boston.target
x_df = DataFrame(x, columns=boston.feature_names)
x_train, x_test, y_train, y_test = train_test_split(x_df, y, test_size=0.15)

First, we'll define the regression model parameters as shown below. You can change the values according to your evaluation targets.

# defining parameters
params = {
    'task': 'train',
    'boosting': 'gbdt',
    'objective': 'regression',
    'num_leaves': 10,
    'learning_rate': 0.05,
    'metric': ['l2', 'l1'],
    'verbose': -1
}

Next, we'll load the train and test data into LightGBM Dataset objects. The code below shows how to load the training data and the evaluation (test) data, and then fit the model.

# loading data
lgb_train = lgb.Dataset(x_train, y_train)
lgb_eval = lgb.Dataset(x_test, y_test, reference=lgb_train)

# fitting the model (early stopping is passed as a callback; the
# early_stopping_rounds keyword was removed from lgb.train in LightGBM 4.x)
model = lgb.train(params,
                  train_set=lgb_train,
                  valid_sets=[lgb_eval],
                  callbacks=[lgb.early_stopping(stopping_rounds=30)])

After training the model, we can predict the test data and check the prediction accuracy. We'll compute the MSE and RMSE metrics of the trained model.

# prediction
y_pred = model.predict(x_test)

# accuracy check
mse = mean_squared_error(y_test, y_pred)
rmse = mse ** 0.5
print("MSE: %.2f" % mse)
print("RMSE: %.2f" % rmse)

To visualize the original and predicted data, we can use the matplotlib library. The code below shows how to plot the original and predicted values in one graph.

# visualizing in a plot
x_ax = range(len(y_test))
plt.figure(figsize=(12, 6))
plt.plot(x_ax, y_test, label="original")
plt.plot(x_ax, y_pred, label="predicted")
plt.title("Boston dataset test and predicted data")
plt.xlabel('X')
plt.ylabel('Price')
plt.legend(loc='best', fancybox=True, shadow=True)
plt.grid(True)
plt.show()

LightGBM provides a plot_importance() function to plot feature importance. The code below shows how to use it.

# plotting feature importance
lgb.plot_importance(model, height=.5)
plt.show()

In this tutorial, we've briefly learned how to fit and predict regression data using LightGBM in Python. The full source code is listed below.

import lightgbm as lgb
from sklearn.datasets import load_boston
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from pandas import DataFrame

boston = load_boston()
x, y = boston.data, boston.target
x_df = DataFrame(x, columns=boston.feature_names)
x_train, x_test, y_train, y_test = train_test_split(x_df, y, test_size=0.15)

# defining parameters
params = {
    'task': 'train',
    'boosting': 'gbdt',
    'objective': 'regression',
    'num_leaves': 10,
    'learning_rate': 0.05,
    'metric': ['l2', 'l1'],
    'verbose': -1
}

# loading data
lgb_train = lgb.Dataset(x_train, y_train)
lgb_eval = lgb.Dataset(x_test, y_test, reference=lgb_train)

# fitting the model
model = lgb.train(params,
                  train_set=lgb_train,
                  valid_sets=[lgb_eval],
                  callbacks=[lgb.early_stopping(stopping_rounds=30)])

# prediction
y_pred = model.predict(x_test)

# accuracy check
mse = mean_squared_error(y_test, y_pred)
rmse = mse ** 0.5
print("MSE: %.2f" % mse)
print("RMSE: %.2f" % rmse)

# visualizing in a plot
x_ax = range(len(y_test))
plt.figure(figsize=(12, 6))
plt.plot(x_ax, y_test, label="original")
plt.plot(x_ax, y_pred, label="predicted")
plt.title("Boston dataset test and predicted data")
plt.xlabel('X')
plt.ylabel('Price')
plt.legend(loc='best', fancybox=True, shadow=True)
plt.grid(True)
plt.show()

# plotting feature importance
lgb.plot_importance(model, height=.5)
plt.show()
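The same workflow can also be expressed with the scikit-learn style LGBMRegressor described in the first part of this article. A hedged sketch follows, reusing x_train, x_test, y_train and y_test from the listing above; the hyperparameter values are illustrative.

# Equivalent sketch with the scikit-learn wrapper.
reg = lgb.LGBMRegressor(num_leaves=10, learning_rate=0.05, n_estimators=100)
reg.fit(x_train, y_train,
        eval_set=[(x_test, y_test)],
        eval_metric=['l2', 'l1'],
        callbacks=[lgb.early_stopping(stopping_rounds=30)])
y_pred_sk = reg.predict(x_test)
rmse_sk = mean_squared_error(y_test, y_pred_sk) ** 0.5
print("RMSE (sklearn API): %.2f" % rmse_sk)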
