lightgbm

class LightGBMVectorRegressionModel(categorical_feature_names: Optional[Union[Sequence[str], str]] = None, random_state=42, num_leaves=31, max_depth=-1, n_estimators=100, min_child_samples=20, importance_type='gain', **model_args)

Bases: sensai.sklearn.sklearn_base.AbstractSkLearnMultipleOneDimVectorRegressionModel, sensai.sklearn.sklearn_base.FeatureImportanceProviderSkLearnRegressionMultipleOneDim

__init__(categorical_feature_names: Optional[Union[Sequence[str], str]] = None, random_state=42, num_leaves=31, max_depth=-1, n_estimators=100, min_child_samples=20, importance_type='gain', **model_args)
Parameters
  • categorical_feature_names – sequence of feature names in the input data that are categorical, or a single string containing a regular expression matching the categorical feature names. Columns that have dtype ‘category’ (as will be the case for categorical columns created via FeatureGenerators) need not be specified (they will be inferred automatically). In general, passing categorical features in this way is preferable to, for example, one-hot encoding them.

  • random_state – the random seed to use

  • num_leaves – the maximum number of leaves in one tree (original lightgbm default is 31)

  • max_depth – maximum tree depth for base learners, <=0 means no limit

  • n_estimators – number of boosted trees to fit

  • min_child_samples – minimum number of data points required in a child (leaf)

  • importance_type – the type of feature importance to be set in the respective property of the wrapped model. If ‘split’, the result contains the number of times the feature is used in the model; if ‘gain’, the result contains the total gain of the splits that use the feature.

  • model_args – see https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRegressor.html
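
A minimal usage sketch follows (assuming the DataFrame-based fit/predict interface of sensai’s vector model base classes; column names and toy data are purely illustrative):

    import pandas as pd
    from sensai.lightgbm import LightGBMVectorRegressionModel

    # illustrative toy data: two numeric features and one categorical feature
    X = pd.DataFrame({
        "size": [30.0, 45.0, 60.0, 80.0, 100.0, 120.0],
        "rooms": [1, 2, 3, 3, 4, 5],
        "city": pd.Categorical(["A", "B", "A", "B", "A", "B"]),
    })
    Y = pd.DataFrame({"price": [100.0, 160.0, 210.0, 280.0, 330.0, 400.0]})

    # explicitly declare "city" as categorical (per the docstring above, columns that
    # already have dtype 'category' would also be inferred automatically);
    # min_child_samples is lowered only so the tiny toy data set admits splits
    model = LightGBMVectorRegressionModel(
        categorical_feature_names=["city"],
        n_estimators=50,
        min_child_samples=1,
    )
    model.fit(X, Y)                 # assumed DataFrame-in/DataFrame-out interface
    predictions = model.predict(X)  # DataFrame with a "price" column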

class LightGBMVectorClassificationModel(categorical_feature_names: Optional[Union[Sequence[str], str]] = None, random_state=42, num_leaves=31, max_depth=-1, n_estimators=100, min_child_samples=20, importance_type='gain', use_balanced_class_weights=False, **model_args)

Bases: sensai.sklearn.sklearn_base.AbstractSkLearnVectorClassificationModel, sensai.sklearn.sklearn_base.FeatureImportanceProviderSkLearnClassification

__init__(categorical_feature_names: Optional[Union[Sequence[str], str]] = None, random_state=42, num_leaves=31, max_depth=-1, n_estimators=100, min_child_samples=20, importance_type='gain', use_balanced_class_weights=False, **model_args)
Parameters
  • categorical_feature_names – sequence of feature names in the input data that are categorical, or a single string containing a regular expression matching the categorical feature names. Columns that have dtype ‘category’ (as will be the case for categorical columns created via FeatureGenerators) need not be specified (they will be inferred automatically). In general, passing categorical features in this way is preferable to, for example, one-hot encoding them.

  • random_state – the random seed to use

  • num_leaves – the maximum number of leaves in one tree (original lightgbm default is 31)

  • max_depth – maximum tree depth for base learners, <=0 means no limit

  • n_estimators – number of boosted trees to fit

  • min_child_samples – minimum number of data points required in a child (leaf)

  • importance_type – the type of feature importance to be set in the respective property of the wrapped model. If ‘split’, the result contains the number of times the feature is used in the model; if ‘gain’, the result contains the total gain of the splits that use the feature.

  • use_balanced_class_weights – whether to compute class weights from the given training data and pass them on to the classifier’s fit method; note that weighted data points may not be supported by all model types

  • model_args – see https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html?highlight=LGBMClassifier
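
Analogously, a brief sketch for the classifier, again under the assumption of the DataFrame-based fit/predict interface; data and names are illustrative only:

    import pandas as pd
    from sensai.lightgbm import LightGBMVectorClassificationModel

    # illustrative toy data with an imbalanced binary label
    X = pd.DataFrame({
        "age": [22, 35, 47, 51, 63, 29, 41, 58],
        "income": [20.0, 55.0, 80.0, 62.0, 90.0, 30.0, 48.0, 75.0],
    })
    Y = pd.DataFrame({"defaulted": [0, 0, 0, 1, 0, 0, 0, 1]})

    # compensate for the under-represented class via class weights computed from Y;
    # min_child_samples is lowered only for the tiny toy data set
    model = LightGBMVectorClassificationModel(
        use_balanced_class_weights=True,
        n_estimators=50,
        min_child_samples=1,
    )
    model.fit(X, Y)               # assumed DataFrame-based fit, as above
    predicted = model.predict(X)  # DataFrame with the predicted class labels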