Tree Quantile Regressor

class skgrf.tree.GRFTreeQuantileRegressor(quantiles=None, regression_splitting=False, equalize_cluster_weights=False, sample_fraction=0.5, mtry=None, min_node_size=5, honesty=True, honesty_fraction=0.5, honesty_prune_leaves=True, alpha=0.05, imbalance_penalty=0, seed=42)[source]

GRF Tree Quantile Regression implementation for scikit-learn.

Provides a sklearn tree quantile regressor interface to the GRF C++ library using Cython.

Warning

Because the training dataset is required for prediction, the training dataset is recorded onto the estimator instance. This means that serializing this estimator will result in a file at least as large as the serialized training dataset.

Parameters
  • quantiles (list(float)) – A list of quantiles on which to predict.

  • regression_splitting (bool) – Use regression splits instead of splitting specially for quantiles.

  • equalize_cluster_weights (bool) – Weight the samples such that clusters have equal weight. If False, larger clusters will have more weight. If True, the number of samples drawn from each cluster is equal to the size of the smallest cluster, and sample weights should not be passed on fitting.

  • sample_fraction (float) – Fraction of samples used in each tree.

  • mtry (int) – The number of features to consider for each split. The default is sqrt(p) + 20, where p is the number of features.

  • min_node_size (int) – The minimum number of observations in each tree leaf.

  • honesty (bool) – Use honest splitting (subsample splitting).

  • honesty_fraction (float) – The fraction of data used for subsample splitting.

  • honesty_prune_leaves (bool) – Prune the estimation sample tree such that no leaves are empty. If False, trees with empty leaves are skipped.

  • alpha (float) – The maximum imbalance of a split.

  • imbalance_penalty (float) – Penalty applied to imbalanced splits.

  • seed (int) – Random seed value.

Variables
  • n_features_in_ (int) – The number of features (columns) from the fit input X.

  • grf_forest_ (dict) – The returned result object from calling C++ grf.

  • mtry_ (int) – The mtry value determined by validation.

  • outcome_index_ (int) – The index of the grf train matrix holding the outcomes.

  • samples_per_cluster_ (list) – The number of samples to train per cluster.

  • clusters_ (list) – The cluster labels determined from the fit input cluster.

  • n_clusters_ (int) – The number of unique cluster labels from the fit input cluster.

  • train_ (array2d) – The X,y concatenated train matrix passed to grf.

  • criterion (str) – The criterion used for splitting: gini
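
A minimal usage sketch (illustrative only; the synthetic data, the variable names, and the expectation of one prediction column per requested quantile are assumptions, with the import path taken from the class signature above):

    from sklearn.datasets import make_regression

    from skgrf.tree import GRFTreeQuantileRegressor

    # Small synthetic regression problem for illustration.
    X, y = make_regression(n_samples=100, n_features=5, random_state=0)

    # Fit a single quantile tree on three quantiles.
    tree = GRFTreeQuantileRegressor(quantiles=[0.1, 0.5, 0.9], seed=42)
    tree.fit(X, y)

    # Predictions are expected to contain one column per requested quantile.
    preds = tree.predict(X)
    print(preds.shape)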

apply(X)

Calculate the index of the leaf for each sample.

Parameters

X (array2d) – input features
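
A short sketch, assuming a fitted estimator named tree and a feature matrix X as in the usage example above:

    # Index of the leaf each sample falls into.
    leaf_ids = tree.apply(X)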

decision_path(X)

Calculate the decision path through the tree for each sample.

Parameters

X (array2d) – input features
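
A short sketch under the same assumptions; following the scikit-learn tree API, the result is expected to be a node indicator matrix:

    # Indicator of the nodes each sample passes through.
    node_indicator = tree.decision_path(X)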

fit(X, y, cluster=None)[source]

Fit the grf tree quantile regressor using training data.

Parameters
  • X (array2d) – training input features

  • y (array1d) – training input targets

  • cluster (array1d) – optional cluster assignments for input samples
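
A sketch of fitting with optional cluster assignments (the data and cluster labels below are synthetic and purely illustrative):

    import numpy as np

    from skgrf.tree import GRFTreeQuantileRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = X[:, 0] + rng.normal(size=200)

    # Hypothetical cluster labels grouping the samples into 4 clusters.
    cluster = rng.integers(0, 4, size=200)

    tree = GRFTreeQuantileRegressor(quantiles=[0.25, 0.75])
    tree.fit(X, y, cluster=cluster)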

classmethod from_forest(forest: GRFForestQuantileRegressor, idx: int)[source]

Extract a tree from a forest.

Parameters
  • forest (GRFForestQuantileRegressor) – A trained GRFForestQuantileRegressor instance.

  • idx (int) – The tree index from the forest to extract.
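
A sketch of extracting a single tree from a trained forest, reusing X and y from the fit example above; the skgrf.ensemble import path for GRFForestQuantileRegressor is an assumption based on the type annotation:

    from skgrf.ensemble import GRFForestQuantileRegressor
    from skgrf.tree import GRFTreeQuantileRegressor

    forest = GRFForestQuantileRegressor(quantiles=[0.5])
    forest.fit(X, y)

    # Pull out the first tree as a standalone estimator.
    tree = GRFTreeQuantileRegressor.from_forest(forest, idx=0)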

get_depth()

Calculate the maximum depth of the tree.

get_n_leaves()

Calculate the number of leaves of the tree.
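
A short sketch covering get_depth and get_n_leaves, assuming a fitted estimator named tree:

    # Structural summaries of the fitted tree.
    print(tree.get_depth(), tree.get_n_leaves())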

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict
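
A short sketch, assuming an estimator named tree constructed as in the usage example above:

    # Hyperparameters come back as a plain dict.
    params = tree.get_params()
    print(params["quantiles"], params["min_node_size"])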

predict(X)[source]

Predict quantile regression target(s) for X.

Parameters

X (array2d) – prediction input features
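
A short sketch, assuming tree was constructed with quantiles=[0.1, 0.5, 0.9] and fitted as in the usage example above; that the output columns follow the order of the quantiles parameter is an assumption:

    preds = tree.predict(X)
    # One column per requested quantile, e.g. the middle column for the median.
    median = preds[:, 1]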

score(X, y, sample_weight=None)

Return the coefficient of determination \(R^2\) of the prediction.

The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.

Parameters
  • X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns

score – \(R^2\) of self.predict(X) w.r.t. y.

Return type

float

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
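
A sketch of the \(R^2\) definition given above, written out directly; this mirrors the formula rather than the library's internal implementation:

    import numpy as np

    def r_squared(y_true, y_pred):
        # y_true and y_pred are 1d numpy arrays of the same shape.
        u = ((y_true - y_pred) ** 2).sum()  # residual sum of squares
        v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
        return 1.0 - u / v

    # A constant prediction at the mean of y yields exactly 0.0.
    y = np.array([1.0, 2.0, 3.0, 4.0])
    print(r_squared(y, np.full_like(y, y.mean())))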

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance
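
A short sketch, assuming an estimator named tree; a nested estimator would use the <component>__<parameter> form described above:

    # Update hyperparameters in place; the estimator itself is returned.
    tree.set_params(min_node_size=10, honesty=False)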