[Feature] XGBoost native multiquantile regression#3056

Merged
dennisbader merged 18 commits into unit8co:master from ozink-u8:feat/xgb-multiquantile
Apr 9, 2026

Conversation

@ozink-u8
Contributor

@ozink-u8 ozink-u8 commented Apr 1, 2026

Checklist before merging this PR:

  • Mentioned all issues that this PR fixes or addresses.
  • Summarized the updates of this PR under Summary.
  • Added an entry under Unreleased in the Changelog.

Fixes #3047

Summary

Analogously to #3032, this PR adds native multiquantile regression to XGBoost.

Other Information

Change Summary (Claude Code)

  • darts/models/forecasting/xgboost.py: accept likelihood="multiquantile", set objective="reg:quantileerror" and quantile_alpha=<list>, update docstrings
  • darts/utils/likelihood_models/sklearn.py: normalize quantiles to builtin float to avoid an XGBoostError caused by np.float64 serialization. XGBoost appears to do some casting itself, but sorted() produces a list[np.float64] instead of an np.ndarray, so its np.ndarray check does not catch it.
  • darts/tests/models/forecasting/test_sklearn_models.py: add test_model_construction_multiquantile_xgb and test_get_estimator_multiquantile_xgb analogously to [Feature] CatBoost MultiQuantile support #3032
  • examples/20-SKLearnModel-examples.ipynb: mention XGBModel alongside CatBoostModel in the multiquantile tip analogously to [Feature] CatBoost MultiQuantile support #3032
  • darts/tests/explainability/test_shap_explainer.py: fix deprecated pandas freq "d" → "D" (coincidentally also analogously to [Feature] CatBoost MultiQuantile support #3032)
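The np.float64 pitfall mentioned for darts/utils/likelihood_models/sklearn.py can be reproduced with plain NumPy. This is a hedged sketch: the variable names are illustrative, not the actual darts code.

```python
import numpy as np

# Sorting a NumPy array of quantiles yields a plain Python list of
# np.float64 scalars, not an np.ndarray, so a downstream isinstance
# check for np.ndarray would not normalize the values.
quantiles = np.array([0.5, 0.1, 0.9])
sorted_quantiles = sorted(quantiles)

print(type(sorted_quantiles))      # <class 'list'>
print(type(sorted_quantiles[0]))   # <class 'numpy.float64'>

# The workaround described above: cast each value to a builtin float
# before handing the list to XGBoost, so serialization succeeds.
normalized = [float(q) for q in sorted_quantiles]
assert all(type(q) is float for q in normalized)
```

This is why the PR normalizes the quantiles on the darts side rather than relying on XGBoost's own casting.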

@ozink-u8 ozink-u8 requested a review from dennisbader as a code owner April 1, 2026 12:19
@review-notebook-app

Check out this pull request on  ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.


Powered by ReviewNB

@codecov

codecov bot commented Apr 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 95.72%. Comparing base (9c62525) to head (a9cf234).
⚠️ Report is 1 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #3056      +/-   ##
==========================================
- Coverage   95.78%   95.72%   -0.07%     
==========================================
  Files         158      158              
  Lines       17290    17293       +3     
==========================================
- Hits        16562    16554       -8     
- Misses        728      739      +11     

☔ View full report in Codecov by Sentry.

Collaborator

@dennisbader dennisbader left a comment


Thanks a lot @ozink-u8, it looks great already :)

I added some minor comments, after that it's ready to be merged 🚀

pred_sub_model = sub_model.predict(dummy_feats)[0]
np.testing.assert_array_equal(pred_sub_model, pred_j[i])

@pytest.mark.skipif(not XGB_AVAILABLE, reason="XGBoost required for this test")
Collaborator


This test is pretty much the same as the one for CatBoost above. In general we try to avoid code duplication as much as we can :)

We could cover both models in the existing test.

You can achieve this by adding another level of parametrization to the test function decorators:

    @pytest.mark.skipif(
        not XGB_AVAILABLE and not CB_AVAILABLE,
        reason="XGBoost or CatBoost required for this test"
    )
    @pytest.mark.parametrize(
        "model_cls,model_kwargs",
        (
            ([(CatBoostModel, cb_test_params)] if CB_AVAILABLE else [])
            + ([(XGBModel, xgb_test_params)] if XGB_AVAILABLE else [])
        )
    )

Then you have to slightly adapt the code below to use the new parameters and setup the model dynamically.
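The conditional parameter list in the decorator above composes as sketched below. The availability flags and model names here are stand-ins for the real CB_AVAILABLE / XGB_AVAILABLE flags and model classes, so the snippet runs without either library installed.

```python
# Stand-ins for the real availability flags and test kwargs.
CB_AVAILABLE = True
XGB_AVAILABLE = True
cb_test_params = {"likelihood": "multiquantile"}
xgb_test_params = {"likelihood": "multiquantile"}

# Each available model contributes one (model_cls, model_kwargs) pair;
# unavailable models simply contribute an empty list, so pytest only
# generates test cases for installed backends.
params = (
    ([("CatBoostModel", cb_test_params)] if CB_AVAILABLE else [])
    + ([("XGBModel", xgb_test_params)] if XGB_AVAILABLE else [])
)

print(params)  # with both flags True: one entry per model
```

With both libraries available, pytest would run the test once per pair; with neither, the skipif marker skips the test entirely.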

assert likelihood.type == LikelihoodType.MultiQuantile
assert likelihood.quantiles == [0.1, 0.3, 0.5, 0.7, 0.9]

@pytest.mark.skipif(not XGB_AVAILABLE, reason="XGBoost required for this test")
Collaborator


The same here as above: let's cover both models in the same test via parametrization.

ozink-u8 and others added 11 commits April 2, 2026 13:30
Co-authored-by: Dennis Bader <dennis.bader@gmx.ch>
Co-authored-by: Dennis Bader <dennis.bader@gmx.ch>
Description wrongly stated that one model is fit for all quantiles before

Co-authored-by: Dennis Bader <dennis.bader@gmx.ch>
Co-authored-by: Dennis Bader <dennis.bader@gmx.ch>
Collaborator

@dennisbader dennisbader left a comment


Looks great now, thanks a lot for the updates @ozink-u8 🚀

I pushed some minor changes, now everything is good to go 💯

@dennisbader dennisbader merged commit e803ef6 into unit8co:master Apr 9, 2026
9 checks passed

Development

Successfully merging this pull request may close these issues.

[Feature] XGBoost native multiquantile regression

2 participants