[Feature] XGBoost native multiquantile regression #3056
dennisbader merged 18 commits into unit8co:master
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##           master    #3056      +/-   ##
==========================================
- Coverage   95.78%   95.72%   -0.07%
==========================================
  Files         158      158
  Lines       17290    17293       +3
==========================================
- Hits        16562    16554       -8
- Misses        728      739      +11
==========================================
```
dennisbader
left a comment
Thanks a lot @ozink-u8, it looks great already :)
I added some minor comments, after that it's ready to be merged 🚀
```python
pred_sub_model = sub_model.predict(dummy_feats)[0]
np.testing.assert_array_equal(pred_sub_model, pred_j[i])


@pytest.mark.skipif(not XGB_AVAILABLE, reason="XGBoost required for this test")
```
This test is pretty much the same as the one for CatBoost above. In general we try to avoid code duplication as much as we can :)
We could cover both models in the existing test. You can achieve this by adding another level of parametrization to the test function decorators:
```python
@pytest.mark.skipif(
    not XGB_AVAILABLE and not CB_AVAILABLE,
    reason="XGBoost or CatBoost required for this test",
)
@pytest.mark.parametrize(
    "model_cls,model_kwargs",
    ([(CatBoostModel, cb_test_params)] if CB_AVAILABLE else [])
    + ([(XGBModel, xgb_test_params)] if XGB_AVAILABLE else []),
)
```
Then you have to slightly adapt the code below to use the new parameters and set up the model dynamically.
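A minimal sketch of what that adapted test body could look like. The model classes and kwargs dicts below are stand-ins (the real `CatBoostModel`/`XGBModel` come from darts, and `cb_test_params`/`xgb_test_params` are whatever the test module already defines); only the parametrization pattern is the point.

```python
import pytest

# assumption: availability flags as defined in the real test module
CB_AVAILABLE = True
XGB_AVAILABLE = True


class CatBoostModel:  # stand-in for darts.models.CatBoostModel
    def __init__(self, **kwargs):
        self.kwargs = kwargs


class XGBModel:  # stand-in for darts.models.XGBModel
    def __init__(self, **kwargs):
        self.kwargs = kwargs


# stand-ins for the per-model test kwargs used in the test module
cb_test_params = {"lags": 2}
xgb_test_params = {"lags": 2}


@pytest.mark.skipif(
    not XGB_AVAILABLE and not CB_AVAILABLE,
    reason="XGBoost or CatBoost required for this test",
)
@pytest.mark.parametrize(
    "model_cls,model_kwargs",
    ([(CatBoostModel, cb_test_params)] if CB_AVAILABLE else [])
    + ([(XGBModel, xgb_test_params)] if XGB_AVAILABLE else []),
)
def test_multiquantile(model_cls, model_kwargs):
    # the body no longer hardcodes a model class: it builds the model
    # dynamically from the parametrized (model_cls, model_kwargs) pair
    model = model_cls(
        likelihood="multiquantile", quantiles=[0.1, 0.5, 0.9], **model_kwargs
    )
    assert model.kwargs["likelihood"] == "multiquantile"
```

Since pytest only collects the decorated function, calling it directly with one of the parameter pairs also works for quick local checks.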
```python
assert likelihood.type == LikelihoodType.MultiQuantile
assert likelihood.quantiles == [0.1, 0.3, 0.5, 0.7, 0.9]


@pytest.mark.skipif(not XGB_AVAILABLE, reason="XGBoost required for this test")
```
Same as above: let's cover both models in the same test via parametrization.
Co-authored-by: Dennis Bader <dennis.bader@gmx.ch>
Description wrongly stated that one model is fit for all quantiles before
…to feat/xgb-multiquantile
dennisbader
left a comment
Looks great now, thanks a lot for the updates @ozink-u8 🚀
I pushed some minor changes; now everything is good to go 💯
Fixes #3047
Summary
Analogously to #3032, this PR adds multiquantile regression to XGBoost.
Other Information
Change Summary (Claude Code)
- `darts/models/forecasting/xgboost.py`: accept `likelihood="multiquantile"`, set `objective="reg:quantileerror"` and `quantile_alpha=<list>`, update docstrings
- `darts/utils/likelihood_models/sklearn.py`: normalize quantiles to `float` to avoid an `XGBoostError` caused by `np.float64` serialization. XGBoost seemed to do some form of casting itself, but the sort produced a `list[np.float64]` instead of an `np.ndarray`, so XGBoost's `np.ndarray` check did not catch it.
- `darts/tests/models/forecasting/test_sklearn_models.py`: add `test_model_construction_multiquantile_xgb` and `test_get_estimator_multiquantile_xgb`, analogously to [Feature] CatBoost MultiQuantile support #3032
- `examples/20-SKLearnModel-examples.ipynb`: mention `XGBModel` alongside `CatBoostModel` in the multiquantile tip, analogously to #3032
- `darts/tests/explainability/test_shap_explainer.py`: fix deprecated pandas freq `"d"` → `"D"` (coincidentally also analogously to #3032)
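The quantile-normalization issue described above can be reproduced in isolation. This is a hedged sketch, not the darts implementation: sorting a NumPy array with Python's `sorted()` yields a plain list of `np.float64` scalars, which an `isinstance(..., np.ndarray)` check inside XGBoost would not recognize; converting each element to a builtin `float` avoids the serialization problem.

```python
import numpy as np

# unsorted quantiles as they might arrive from user input
quantiles = np.array([0.5, 0.1, 0.9])

# sorted() on an ndarray returns list[np.float64], not an ndarray,
# so a downstream np.ndarray type check silently misses it
sorted_np = sorted(quantiles)
print(type(sorted_np[0]))  # numpy scalar type, not builtin float

# the fix: normalize every element to a builtin float
normalized = [float(q) for q in sorted(quantiles)]
print(type(normalized[0]))
print(normalized)
```

Note that `np.float64` subclasses Python's `float`, so the difference is invisible to `isinstance(q, float)` checks; it only surfaces when a library inspects the concrete type or tries to serialize the value.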