
Conversation

@rwtarpit
Contributor

@rwtarpit rwtarpit commented Dec 1, 2025

Reference Issues/PRs

Fixes #2910

What does this implement/fix? Explain your changes.

This PR implements the Mixin class for series-to-series forecasters by introducing an abstract method _series_to_series_forecast.
A dummy series forecaster is also implemented in _dummy_series_forecaster.py for testing purposes.
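As a rough illustration of the shape this could take, here is a minimal, self-contained sketch of a mixin exposing an abstract _series_to_series_forecast, with a trivial dummy subclass. The class bodies and the naive "repeat last value" behaviour are assumptions for illustration, not the actual aeon implementation:

```python
from abc import ABC, abstractmethod


class SeriesToSeriesForecastingMixin(ABC):
    """Hypothetical sketch of the mixin; the aeon version may differ."""

    @abstractmethod
    def _series_to_series_forecast(self, y):
        """Map an input series ``y`` to a forecast series."""
        ...


class DummySeriesForecaster(SeriesToSeriesForecastingMixin):
    """Trivial test-only forecaster: repeats the last observed value."""

    def __init__(self, horizon=1):
        self.horizon = horizon

    def _series_to_series_forecast(self, y):
        # Naive forecast: repeat the last value across the horizon.
        return [y[-1]] * self.horizon
```

A dummy like this only needs to exercise the mixin contract, so the simplest deterministic behaviour is enough for tests.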

Does your contribution introduce a new dependency? If yes, which one?

Any other comments?

PR checklist

For all contributions
  • I've added myself to the list of contributors. Alternatively, you can use the @all-contributors bot to do this for you after the PR has been merged.
  • The PR title starts with either [ENH], [MNT], [DOC], [BUG], [REF], [DEP] or [GOV] indicating whether the PR topic is related to enhancement, maintenance, documentation, bugs, refactoring, deprecation or governance.
For new estimators and functions
  • I've added the estimator/function to the online API documentation.
  • (OPTIONAL) I've added myself as a __maintainer__ at the top of relevant files and want to be contacted regarding its maintenance. Unmaintained files may be removed. This is for the full file, and you should not add yourself if you are just making minor changes or do not want to help maintain its contents.
For developers with write access
  • (OPTIONAL) I've updated aeon's CODEOWNERS to receive notifications about future changes to these files.

@aeon-actions-bot added the enhancement (New feature, improvement request or other non-bug code enhancement) and forecasting (Forecasting package) labels on Dec 1, 2025
@aeon-actions-bot
Contributor

Thank you for contributing to aeon

I have added the following labels to this PR based on the title: [ enhancement ].
I have added the following labels to this PR based on the changes made: [ forecasting ]. Feel free to change these if they do not properly represent the PR.

The Checks tab will show the status of our automated tests. You can click on individual test runs in the tab or "Details" in the panel below to see more information if there is a failure.

If our pre-commit code quality check fails, any trivial fixes will automatically be pushed to your PR unless it is a draft.

Don't hesitate to ask questions on the aeon Slack channel if you have any.

PR CI actions

These checkboxes will add labels to enable/disable CI functionality for this PR. This may not take effect immediately, and a new commit may be required to run the new configuration.

  • Run pre-commit checks for all files
  • Run mypy typecheck tests
  • Run all pytest tests and configurations
  • Run all notebook example tests
  • Run numba-disabled codecov tests
  • Stop automatic pre-commit fixes (always disabled for drafts)
  • Disable numba cache loading
  • Regenerate expected results for testing
  • Push an empty commit to re-run CI checks

@rwtarpit
Contributor Author

rwtarpit commented Dec 1, 2025

@hadifawaz1999 this PR is regarding the unimplemented SeriesToSeriesForecastingMixin we talked about on Slack.
Please suggest any changes.

Member

@hadifawaz1999 hadifawaz1999 left a comment


Thanks for taking care of this. A few comments before going into details:

  • The dummy forecaster should live under forecasting/deep_learning and should inherit from the base deep forecaster instead of the base forecaster.
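The requested restructuring might look like the following sketch. BaseDeepForecaster is stubbed here so the snippet runs on its own without aeon or TensorFlow; the real base class, its constructor signature, and DummyDeepForecaster's behaviour are all assumptions for illustration:

```python
# Hypothetical sketch: the dummy moves under forecasting/deep_learning and
# extends the deep forecasting base class rather than the plain base class.
class BaseDeepForecaster:  # stand-in for aeon's deep forecasting base class
    def __init__(self, horizon=1):
        self.horizon = horizon


class DummyDeepForecaster(BaseDeepForecaster):
    """Test-only forecaster; real subclasses would build a network here."""

    def _series_to_series_forecast(self, y):
        # Placeholder behaviour: echo the input series unchanged.
        return list(y)
```

Inheriting from the deep base class means the dummy picks up the deep-learning plumbing (and its TensorFlow soft dependency), which matters for the test skipping discussed below in the thread.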

@hadifawaz1999 added the deep learning (Deep learning related) label on Dec 1, 2025
@rwtarpit
Contributor Author

rwtarpit commented Dec 1, 2025

@hadifawaz1999 the last few tests that are failing are specific to the Python 3.13 environment, due to TensorFlow compatibility issues. TensorFlow is being imported in BaseDeepForecaster.
Should I skip tests with a TensorFlow dependency?

@hadifawaz1999
Member


Hello, for all tests you should add this decorator:

@pytest.mark.skipif(
    not _check_soft_dependencies("tensorflow", severity="none"),
    reason="skip test if required soft dependency not available",
)


Labels

  • deep learning: Deep learning related
  • enhancement: New feature, improvement request or other non-bug code enhancement
  • forecasting: Forecasting package


Development

Successfully merging this pull request may close these issues.

[ENH] Forecasting series to series models
