The move_to_device function previously expected only torch.Tensor objects.
This caused an AttributeError when a torch.nn.Module (such as a Linear layer)
was passed in, because modules do not expose a `.device` attribute the way
tensors do.
This commit modifies move_to_device to:
1. Check whether the input is an instance of nn.Module.
2. If so, use the module's own `.to(device)` method for device placement
   (see the sketch after this list).
3. Update the function's type hints to reflect that it accepts
   Union[torch.Tensor, torch.nn.Module].
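
A minimal sketch of the updated helper, assuming a standalone function;
the exact signature and error handling in the repository may differ:

    from typing import Union

    import torch
    import torch.nn as nn


    def move_to_device(
        obj: Union[torch.Tensor, nn.Module],
        device: Union[str, torch.device],
    ) -> Union[torch.Tensor, nn.Module]:
        """Move a tensor or module to the given device."""
        if isinstance(obj, nn.Module):
            # nn.Module.to() moves parameters and buffers in place
            # and returns the module itself.
            return obj.to(device)
        if isinstance(obj, torch.Tensor):
            # Tensor.to() returns a tensor on the target device.
            return obj.to(device)
        raise TypeError(f"Expected torch.Tensor or nn.Module, got {type(obj)}")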
Additionally, a new unit test has been added to test_api.py to verify
that move_to_device correctly handles both torch.Tensor and torch.nn.Module
objects, moving them to the target device (CPU or CUDA) as expected.
This ensures the fix is effective and prevents regressions.
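
For reference, a sketch of what such a test could look like; the test name
and the `from api import move_to_device` import path are illustrative
assumptions, not the exact code added to test_api.py:

    import pytest
    import torch
    import torch.nn as nn

    from api import move_to_device  # assumed import path


    @pytest.mark.parametrize(
        "device",
        ["cpu"] + (["cuda"] if torch.cuda.is_available() else []),
    )
    def test_move_to_device_handles_tensor_and_module(device):
        tensor = torch.zeros(2, 3)
        module = nn.Linear(3, 4)

        moved_tensor = move_to_device(tensor, device)
        moved_module = move_to_device(module, device)

        assert moved_tensor.device.type == device
        # A module's device is inferred from its parameters.
        assert next(moved_module.parameters()).device.type == device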