
fix: Add helpful error message if prepare_for_*() not called#1333

Open
Aniketsy wants to merge 9 commits into NVIDIA-NeMo:main from Aniketsy:fix-helpful-error-message-if-not-prepared

Conversation


@Aniketsy Aniketsy commented Oct 10, 2025

#1141
Adds checks to train, get_logprobs, and generate in MegatronPolicyWorker to raise a clear error if the model is not prepared for GPU execution.

Please let me know if my approach or fix needs any improvements. I’m open to feedback and happy to make changes based on suggestions.
Thank you!

Summary by CodeRabbit

  • New Features

    • Adds a readiness check requiring explicit preparation before training or inference, improving safety and clarity.
    • Provides consistent, user-friendly error messages if operations are attempted before preparation.
  • Bug Fixes

    • Prevents accidental GPU execution before the model is properly prepared, reducing crashes and undefined behavior during training and generation.

@Aniketsy Aniketsy requested a review from a team as a code owner October 10, 2025 06:26
@Aniketsy Aniketsy changed the title Add explicit error message if prepare_for_training or prepare_for_lp_inference not called Add helpful error message if prepare_for_*() not called Oct 10, 2025

coderabbitai bot commented Oct 10, 2025

📝 Walkthrough

Walkthrough

Adds a per-instance readiness flag to MegatronPolicyWorker. Methods train, get_logprobs, and generate now raise RuntimeError if called before preparation. The flag is set during prepare_for_training and prepare_for_lp_inference. This enforces an explicit preparation step before GPU-bound execution.

Changes

Cohort: Readiness gating for GPU execution
File(s): nemo_rl/models/policy/megatron_policy_worker.py
Summary: Introduced is_prepared flag (default False). Added guards in train, get_logprobs, and generate to raise RuntimeError if not prepared. Set is_prepared = True in prepare_for_training and prepare_for_lp_inference. Centralized error message for unprepared invocation.
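In outline, the guard pattern looks roughly like the following (a simplified sketch for illustration; the real worker's constructor, method signatures, and exact message text differ):

```python
class MegatronPolicyWorker:
    """Simplified sketch; the real worker takes many constructor arguments."""

    _NOT_PREPARED_MSG = (
        "Model is not prepared for GPU execution. "
        "Call prepare_for_training() or prepare_for_lp_inference() first."
    )

    def __init__(self):
        # Readiness flag: training/inference is blocked until a prepare_* call.
        self.is_prepared = False

    def prepare_for_training(self):
        # ... move model and optimizer state onto the GPU ...
        self.is_prepared = True

    def prepare_for_lp_inference(self):
        # ... move model onto the GPU for log-prob inference ...
        self.is_prepared = True

    def train(self, data, loss_fn):
        if not self.is_prepared:
            raise RuntimeError(self._NOT_PREPARED_MSG)
        # ... GPU-bound training step ...

    def get_logprobs(self, data):
        if not self.is_prepared:
            raise RuntimeError(self._NOT_PREPARED_MSG)
        # ... GPU-bound log-prob computation ...

    def generate(self, data):
        if not self.is_prepared:
            raise RuntimeError(self._NOT_PREPARED_MSG)
        # ... GPU-bound generation ...
```

Calling train(), get_logprobs(), or generate() before either prepare method then fails fast with the centralized RuntimeError rather than a low-level CUDA/device error.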

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant C as Caller
  participant W as MegatronPolicyWorker

  Note over W: is_prepared defaults to False

  C->>W: prepare_for_training() / prepare_for_lp_inference()
  activate W
  W->>W: set is_prepared = True
  deactivate W

  alt Prepared
    C->>W: train()/get_logprobs()/generate()
    W-->>C: proceed with GPU-bound execution
  else Not prepared
    C->>W: train()/get_logprobs()/generate()
    W-->>C: RuntimeError("Model must be prepared before execution")
  end

  Note over W: Guards enforce explicit preparation before use

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 42.86%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit’s high-level summary is enabled.
  • Test Results For Major Changes (✅ Passed): The PR adds readiness guards and a boolean flag to MegatronPolicyWorker to raise clear errors if training/inference methods are called before preparation. This is a behavioral safety check that does not alter numerics, convergence, or steady-state performance when properly prepared, and only adds early-exit errors otherwise. The PR description does not include test results, but given the scope, this qualifies as a minor change under the check’s criteria, so test results are not required for this PR to pass the check.
  • Title Check (✅ Passed): The title clearly and concisely captures the primary change, adding an explicit error message when prepare_for_*() is not called, and aligns with the PR’s objective of enforcing preparation steps before GPU execution.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7c574d0 and a5c9080.

📒 Files selected for processing (1)
  • nemo_rl/models/policy/megatron_policy_worker.py (5 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts

Files:

  • nemo_rl/models/policy/megatron_policy_worker.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

nemo_rl/**/*.py: Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)

Files:

  • nemo_rl/models/policy/megatron_policy_worker.py
🧬 Code graph analysis (1)
nemo_rl/models/policy/megatron_policy_worker.py (4)
nemo_rl/models/policy/interfaces.py (2)
  • offload_before_refit (149-150)
  • prepare_for_training (125-126)
nemo_rl/models/policy/lm_policy.py (2)
  • offload_before_refit (735-738)
  • prepare_for_training (633-636)
nemo_rl/models/policy/dtensor_policy_worker.py (2)
  • offload_before_refit (1856-1866)
  • prepare_for_training (1831-1852)
nemo_rl/models/policy/dtensor_policy_worker_v2.py (2)
  • offload_before_refit (1817-1827)
  • prepare_for_training (1792-1813)
🪛 Ruff (0.13.3)
nemo_rl/models/policy/megatron_policy_worker.py

887-890: Avoid specifying long messages outside the exception class

(TRY003)


1156-1159: Avoid specifying long messages outside the exception class

(TRY003)


1451-1454: Avoid specifying long messages outside the exception class

(TRY003)


1787-1787: Unused method argument: args

(ARG002)


1787-1787: Unused method argument: kwargs

(ARG002)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Post automodel integration comment / Comment on PR
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (2)
nemo_rl/models/policy/megatron_policy_worker.py (2)

456-456: LGTM! Clean initialization of the readiness flag.

The is_prepared flag is appropriately initialized in __init__ and serves its purpose well.


1782-1782: LGTM! Flag is set at the appropriate locations.

The is_prepared flag is correctly set to True in both preparation methods. This ensures the guard checks will pass after proper initialization.

Note: The flag remains True even after offload_after_refit moves the model to CPU. Verify this is intentional - the flag seems to track whether initial preparation was done (not current GPU state). If the model is used after being offloaded, device mismatch errors will occur naturally.

Also applies to: 1789-1789

@Aniketsy Aniketsy changed the title Add helpful error message if prepare_for_*() not called fix: Add helpful error message if prepare_for_*() not called Oct 10, 2025

@terrykong terrykong left a comment


Hi @Aniketsy, thanks for the contribution. This approach looks reasonable to me. Would you mind adding this in the other policy workers so we have parity?

Also, a unit test would be appreciated.

@Aniketsy Aniketsy force-pushed the fix-helpful-error-message-if-not-prepared branch from 47b0f07 to 95784e1 on October 16, 2025 10:42
@Aniketsy Aniketsy requested review from a team as code owners October 16, 2025 10:42
@Aniketsy
Author

@terrykong I've updated the changes as per your suggestions, please let me know if this needs improvement.

@terrykong
Contributor

Hi @Aniketsy. Right now it looks like once you call prepare once, the flag is never set back to False, so you can still fall into the situation where it fails and you don't get this nice message because the parameters were offloaded.

Do you see any way to do this without introducing a complex state machine?
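For illustration only (this is not code proposed in the thread), one lightweight option would be to clear the flag wherever parameters are offloaded, so the readiness guard fires again instead of a device-mismatch error:

```python
class MegatronPolicyWorker:  # excerpt-style sketch, not the real class
    def __init__(self):
        self.is_prepared = False

    def offload_after_refit(self):
        # ... move parameters and optimizer state back to CPU ...
        # Clearing the flag makes the next train()/get_logprobs()/generate()
        # call hit the readiness guard again rather than a device-mismatch error.
        self.is_prepared = False
```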

@Aniketsy
Author

@terrykong Sorry, I missed the notification. I’ll look into this again. I’ll also explore if there’s a way to address this without introducing a complex state machine.

@chtruong814 chtruong814 added the needs-follow-up label Jan 11, 2026
@Aniketsy Aniketsy force-pushed the fix-helpful-error-message-if-not-prepared branch from 9d4a85d to 4adb2fe on January 27, 2026 04:33
@Aniketsy
Author

Could you please approve the CI to run?

@chtruong814 chtruong814 added and removed the needs-follow-up label Jan 27, 2026
apply_torch_aten_alias_tensor_patch()

"""Initialize the DTensorPolicyWorker."""
self.is_prepared = False
Contributor


could you do the same modifications to dtensor_policy_worker_v2.py too?

Author


Thanks for the review! I'll update.

mbs: Optional[int] = None,
) -> dict[str, Any]:
"""Train the policy on a batch of data with a given loss function."""
if not self.is_prepared:
Contributor


One concern is that this variable self.is_prepared can't differentiate between the state of "being prepared for training" and the state of "being prepared for logprob".

Would it be a better idea to make this variable an enum of states? There would be three states:

  • prepared for training
  • prepared for logprob but not training
  • can't run logprob or training

It would be initialized to 0 (the unprepared state); as we do offloading/onloading, the state would transition.
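A minimal sketch of this suggested three-state approach (the PreparedState name, values, and transitions below are illustrative assumptions, not part of the PR):

```python
from enum import Enum, auto


class PreparedState(Enum):
    NOT_PREPARED = auto()        # can't run logprob or training
    PREPARED_FOR_LP = auto()     # prepared for logprob but not training
    PREPARED_FOR_TRAIN = auto()  # prepared for training


class MegatronPolicyWorker:
    def __init__(self):
        self.state = PreparedState.NOT_PREPARED

    def prepare_for_training(self):
        # ... onload model and optimizer ...
        self.state = PreparedState.PREPARED_FOR_TRAIN

    def prepare_for_lp_inference(self):
        # ... onload model only ...
        self.state = PreparedState.PREPARED_FOR_LP

    def offload_after_refit(self):
        # ... offload parameters ...
        self.state = PreparedState.NOT_PREPARED

    def train(self, data, loss_fn):
        if self.state is not PreparedState.PREPARED_FOR_TRAIN:
            raise RuntimeError("Call prepare_for_training() before train().")
        # ... training step ...

    def get_logprobs(self, data):
        if self.state is PreparedState.NOT_PREPARED:
            raise RuntimeError(
                "Call prepare_for_training() or prepare_for_lp_inference() "
                "before get_logprobs()."
            )
        # ... log-prob computation ...
```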


guyueh1 commented Feb 4, 2026

@Aniketsy thanks for contributing this feature! Can you take a look at the comments?


github-actions bot commented Feb 4, 2026

⚠️ File Consistency Check

Check based on commit: ad853b7 (PR #1333 from fix-helpful-error-message-if-not-prepared)

⚠️ DTensor Policy Worker Synchronization Warning

The file nemo_rl/models/policy/workers/dtensor_policy_worker.py was modified in this PR, but nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py was not updated.

Why this matters:
These files contain related DTensor policy worker implementations that should be kept synchronized to ensure consistency across different versions.

Action required:

  • Please review if the changes in nemo_rl/models/policy/workers/dtensor_policy_worker.py should also be applied to nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py
  • Update nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py if necessary to maintain consistency
  • If the files are intentionally different, please add a comment in the PR explaining why

Files to check:

  • Modified: nemo_rl/models/policy/workers/dtensor_policy_worker.py
  • Not modified: nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py

This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@chtruong814 chtruong814 removed the needs-follow-up label Feb 4, 2026

Aniketsy commented Feb 6, 2026

@guyueh1 should we keep PreparedState as a shared enum in a new file and import it where needed, or should I define it inside the class in each worker file without creating a new file?
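For illustration, the shared-module option could look roughly like this (the file path is a placeholder, not an agreed location):

```python
# nemo_rl/models/policy/prepared_state.py  (hypothetical location)
from enum import Enum, auto


class PreparedState(Enum):
    NOT_PREPARED = auto()
    PREPARED_FOR_LP = auto()
    PREPARED_FOR_TRAIN = auto()
```

Each policy worker would then import PreparedState from that module rather than redefining it per class.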

@chtruong814 chtruong814 added the needs-follow-up label Feb 8, 2026