Tree Causal Regressor

class skgrf.tree.GRFTreeCausalRegressor(equalize_cluster_weights=False, sample_fraction=0.5, mtry=None, min_node_size=5, honesty=True, honesty_fraction=0.5, honesty_prune_leaves=True, alpha=0.05, imbalance_penalty=0, stabilize_splits=True, orthogonal_boosting=False, n_jobs=-1, seed=42)[source]

GRF causal regression tree implementation for scikit-learn.

Provides an sklearn-compatible causal regression tree, wrapping the GRF C++ library via Cython.

Parameters
  • equalize_cluster_weights (bool) – Weight the samples such that clusters have equal weight. If False, larger clusters will have more weight. If True, the number of samples drawn from each cluster is equal to the size of the smallest cluster. If True, sample weights should not be passed when fitting.

  • sample_fraction (float) – Fraction of samples used in each tree. If ci_group_size > 1, the maximum allowed fraction is 0.5.

  • mtry (int) – The number of features to split on each node. The default is sqrt(p) + 20 where p is the number of features.

  • min_node_size (int) – The minimum number of observations in each tree leaf.

  • honesty (bool) – Use honest splitting (subsample splitting).

  • honesty_fraction (float) – The fraction of data used for subsample splitting.

  • honesty_prune_leaves (bool) – Prune estimation sample tree such that no leaves are empty. If False, trees with empty leaves are skipped.

  • alpha (float) – The maximum imbalance of a split.

  • imbalance_penalty (float) – Penalty applied to imbalanced splits.

  • stabilize_splits (bool) – Whether or not the instrument should be taken into account when determining the imbalance of a split.

  • orthogonal_boosting (bool) – When y_hat or w_hat are None, they are estimated using boosted regression forests. (Not yet implemented)

  • n_jobs (int) – The number of threads. Default is number of CPU cores. Only used for target estimation.

  • seed (int) – Random seed value.

Variables
  • n_features_in_ (int) – The number of features (columns) from the fit input X.

  • grf_forest_ (dict) – The returned result object from calling C++ grf.

  • mtry_ (int) – The mtry value determined by validation.

  • outcome_index_ (int) – The index of the grf train matrix holding the outcomes.

  • samples_per_cluster_ (list) – The number of samples to train per cluster.

  • clusters_ (list) – The cluster labels determined from the fit input cluster.

  • n_clusters_ (int) – The number of unique cluster labels from the fit input cluster.

  • criterion (str) – The criterion used for splitting ("mse").

apply(X)

Calculate the index of the leaf for each sample.

Parameters

X (array2d) – input features

decision_path(X)

Calculate the decision path through the tree for each sample.

Parameters

X (array2d) – input features

fit(X, y, w, y_hat=None, w_hat=None, sample_weight=None, cluster=None)[source]

Fit the grf forest using training data.

Parameters
  • X (array2d) – training input features

  • y (array1d) – training input targets

  • w (array1d) – training input treatments

  • y_hat (array1d) – estimated expected target responses

  • w_hat (array1d) – estimated treatment propensities

  • sample_weight (array1d) – optional weights for input samples

  • cluster (array1d) – optional cluster assignments for input samples

classmethod from_forest(forest: GRFForestCausalRegressor, idx: int)[source]

Extract a tree from a forest.

Parameters
  • forest (GRFForestCausalRegressor) – A trained GRFForestCausalRegressor instance.

  • idx (int) – The tree index from the forest to extract.

get_depth()

Calculate the maximum depth of the tree.

get_n_leaves()

Calculate the number of leaves of the tree.

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

predict(X)

Predict regression target for X.

Parameters

X (array2d) – prediction input features

score(X, y, sample_weight=None)

Return the coefficient of determination \(R^2\) of the prediction.

The coefficient \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.

Parameters
  • X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns

score – \(R^2\) of self.predict(X) w.r.t. y.

Return type

float

Notes

The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance