
fix: validate unique model names for models without alias#1079

Open
HummerQAQ wants to merge 3 commits into Nixtla:main from HummerQAQ:fix/model-name-validation

Conversation

@HummerQAQ

Issue

This PR fixes a gap in model name validation when using custom models without an explicit alias.

While working on a research project, I defined two simple custom forecasting models (modelA and modelB) that do not expose an alias attribute. During experimentation, I accidentally passed duplicate models:

models = [modelA(), modelA()]

However, StatsForecast did not raise an error and instead silently merged both outputs into a single column in the forecast DataFrame. This made it appear as if one model was missing, leading to unnecessary debugging time before identifying the root cause.
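For illustration, here is a minimal, self-contained reproduction of the failure mode. The model class and the name-derivation logic are hypothetical stand-ins (not StatsForecast internals): when two model instances resolve to the same name, a name-keyed mapping silently keeps only one column.

```python
# Hypothetical custom model with no `alias` attribute,
# standing in for the modelA described above.
class ModelA:
    def forecast(self, h):
        return [0.0] * h

models = [ModelA(), ModelA()]

# Deriving one output column per model name: both instances map to
# the same key, so the dict silently keeps a single column.
columns = {type(m).__name__: m.forecast(2) for m in models}
print(list(columns))  # only one "ModelA" column survives
```

This is the "silent merge" symptom: no error is raised, and one model's output simply disappears from the result.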

What changed

  1. Model name validation now:
  • Uses alias when available
  • Falls back to repr(model) when alias is not defined
  • Enforces uniqueness across all models during initialization
  2. A clear ValueError is raised during initialization if duplicate model names are detected
  3. Regression tests were added to ensure duplicate names are rejected even when models do not expose an alias

Why this matters

While most users rely on built-in models, StatsForecast’s API allows users to pass arbitrary custom models. In such advanced or experimental scenarios, silent configuration errors might be costly to debug.

@CLAassistant

CLAassistant commented Jan 21, 2026

CLA assistant check
All committers have signed the CLA.


Copilot AI left a comment


Pull request overview

This PR fixes a validation gap in model name uniqueness checking when using custom models without an explicit alias attribute. Previously, the validation logic had a bug where it used lambda: None as a default value instead of None, and only validated models with an alias attribute. This meant duplicate custom models without aliases would silently merge their outputs into a single column.

Changes:

  • Fixed _validate_model_names() to use repr(model) as fallback when alias is not defined, ensuring all models are validated
  • Corrected the bug in the old code that used lambda: None instead of None as the default value
  • Added test coverage with _NoAliasModel to verify duplicate detection for models without alias

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

  • python/statsforecast/core.py — Fixed model name validation to use repr(model) as a fallback and check uniqueness across all models, not just those with an alias
  • tests/test_core.py — Added the _NoAliasModel test helper and a test case verifying that duplicate models without an alias are properly rejected

