Adding expand_dims for xtensor #1449
base: labeled_tensors
Conversation
Now that we have this PR based on the right commit, @ricardoV94 it is ready for a first look. One question: my first draft of this was based on a later commit -- this draft goes back to an earlier commit, and it looks like …
That's the new name; it better represents the kind of rewrites it holds.
@ricardoV94 I think this is a step toward handling symbolic sizes, but there are a couple of places where I'm not sure what the right behavior is. See the comments in … Do those tests make sense? Are there more cases that should be covered?
The simplest test for symbolic expand_dims is:

size_new_dim = xtensor("size_new_dim", shape=(), dtype=int)
x = xtensor("x", shape=(3,))
y = x.expand_dims(new_dim=size_new_dim)
xr_function = function([x, size_new_dim], y)
x_test = xr_arange_like(x)
size_new_dim_test = DataArray(np.array(5, dtype=int))
result = xr_function(x_test, size_new_dim_test)
expected_result = x_test.expand_dims(new_dim=size_new_dim_test)
xr_assert_allclose(result, expected_result)

You can parametrize the test to try default and explicit non-default axis as well. Sidenote, what is an implicit …
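A sketch of that parametrization, assuming the xr_function, xr_arange_like, and xr_assert_allclose helpers from this PR's test utilities, and assuming expand_dims accepts axis alongside the dim kwargs; the test name and import paths are guesses, not part of the PR:

import numpy as np
import pytest
from xarray import DataArray

# Assumed import locations; adjust to wherever this PR defines them.
from pytensor.xtensor.type import xtensor
from tests.xtensor.util import xr_arange_like, xr_assert_allclose, xr_function


@pytest.mark.parametrize("axis", [None, 1])
def test_expand_dims_symbolic_size(axis):
    x = xtensor("x", dims=("a",), shape=(3,))
    size_new_dim = xtensor("size_new_dim", shape=(), dtype=int)
    kwargs = {} if axis is None else {"axis": axis}

    y = x.expand_dims(new_dim=size_new_dim, **kwargs)
    fn = xr_function([x, size_new_dim], y)

    x_test = xr_arange_like(x)
    result = fn(x_test, DataArray(np.array(5, dtype=int)))
    # On the xarray side the size is a plain int.
    expected = x_test.expand_dims(new_dim=5, **kwargs)
    xr_assert_allclose(result, expected)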
@ricardoV94 I've addressed most of your comments on the previous round, and made a first pass at adding support for multiple dimensions. Please take a look at … Assuming that adding multiple dimensions is rare, what do you think of the loop option (sketched below), as opposed to making a single Op that adds multiple dimensions?
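For reference, a minimal sketch of what "the loop option" could look like, using the user-facing method form; the helper name is hypothetical and the reversed ordering is my assumption about how to match xarray's left-to-right ordering of requested dims:

def expand_dims_many(x, dims_dict):
    """Add several named dims by chaining single-dim expand_dims calls."""
    out = x
    # Insert in reverse so the first requested dim ends up leftmost,
    # matching xarray's ordering for expand_dims({"p": ..., "q": ...}).
    for name, size in reversed(dims_dict.items()):
        out = out.expand_dims({name: size})
    return out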
# Test behavior with symbolic size > 1
# NOTE: This test documents our current behavior where expand_dims broadcasts to the requested size.
# This differs from xarray's behavior where expand_dims always adds a size-1 dimension.
This is not true?
I'm not sure about the general claim in the note, but at least in this case, it seems like we're getting the behavior we want from xtensor, but running the same operation with xarray does something different, causing the test to fail. Here's Cursor's summary:
The test failure confirms that our current implementation of expand_dims broadcasts to the requested size (4 in this case), while xarray's behavior is to always add a size-1 dimension. This is evident from the test output, where the left side (our implementation) has a shape of (batch: 4, a: 2, b: 3), and the right side (xarray's behavior) has a shape of (batch: 1, a: 2, b: 3).
I'm inclined to keep this test to note the discrepancy.
I don't understand the point. The test shows xarray accepts expand_dims({"batch": 4}), and I guess also expand_dims(batch=4).
That is clearly at odds with the comment that xarray always expands with size of 1.
And that's exactly the behavior we want to replicate. If the size kwarg is something that doesn't exist in xarray (I suspect it doesn't; how would you map each size to each new dim?), we shouldn't introduce it; we want to mimic their API.
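For reference, a quick check of the xarray behavior being discussed, with a toy DataArray (the commented shapes follow xarray's documented expand_dims semantics):

import numpy as np
from xarray import DataArray

da = DataArray(np.zeros((2, 3)), dims=("a", "b"))

# New dim with an explicit size: the data is broadcast along "batch".
print(da.expand_dims({"batch": 4}).shape)  # (4, 2, 3)
print(da.expand_dims(batch=4).shape)       # (4, 2, 3), kwargs form

# With just a name, the new dim gets length 1.
print(da.expand_dims("batch").shape)       # (1, 2, 3)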
That's fine. We used that for other Ops and we can revisit later if we want it to be fused.
@ricardoV94 This is ready for another look. The rewrite was a shambles, but I think I have a clearer idea now.
I left some comments above. Rewrite looks good. As we discussed, we should redo the tests to use expand_dims as a method (like xarray users would). Also, I suspected xarray allows specifying the size like …
@ricardoV94 I cleaned up the code as suggested and took a first cut at handling the …
This looks great. Just the question about the size kwarg and small notes.
# Extract size from dim_kwargs if present
size = dim_kwargs.pop("size", 1) if dim_kwargs else 1
Is this a thing in xarray?
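For what it's worth, xarray's expand_dims collects extra keyword arguments as names of new dimensions, so a size keyword would be interpreted as a dimension literally called "size". A quick illustration with a toy DataArray:

import numpy as np
from xarray import DataArray

da = DataArray(np.zeros(3), dims=("a",))

# xarray treats **dim_kwargs keys as new dimension names,
# so "size" becomes a dimension name rather than a length.
print(da.expand_dims(size=2).dims)   # ('size', 'a')
print(da.expand_dims(size=2).shape)  # (2, 3)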
self,
dim: str | Sequence[str] | dict[str, int | Sequence] | None = None,
create_index_for_new_dim: bool = True,
axis: int | None = None, |
I assume?
- axis: int | None = None,
+ axis: int | Sequence[int] | None = None,
@@ -8,10 +8,10 @@
from itertools import chain, combinations

import numpy as np
import pytest
Isn't this needed?
# Symbolic size=1
size_sym_1 = scalar("size_sym_1", dtype="int64")
y = x.expand_dims("country", size=size_sym_1)
fn = xr_function([x, size_sym_1], y)
xr_assert_allclose(fn(x_test, 1), x_test.expand_dims("country"))
We don't need the special size behavior if it's not a thing in xarray
dims_dict = {}
for name, val in dim.items():
    if isinstance(val, Sequence | np.ndarray) and not isinstance(val, str):
        dims_dict[name] = len(val)
Maybe have a warning in this branch that coordinates are not used by PyTensor, and that it will simply take the length of the sequence?
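A minimal sketch of what that warning could look like; the helper name is hypothetical and the wording is just a suggestion:

import warnings
from collections.abc import Sequence

import numpy as np


def _dim_size_from_value(name, val):
    """Return the size implied by a dict value passed to expand_dims.

    Sketch only: coordinate values are not stored by PyTensor, so a
    sequence/array contributes nothing but its length.
    """
    if isinstance(val, Sequence | np.ndarray) and not isinstance(val, str):
        warnings.warn(
            f"Coordinates for new dim {name!r} are ignored by PyTensor; "
            "only len(values) is used as the size of the new dimension.",
            UserWarning,
        )
        return len(val)
    return val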
# Convert to canonical form: list of (dim_name, size)
canonical_dims: list[tuple[str, int | np.integer | TensorVariable]] = []
for name, size in dims_dict.items():
    canonical_dims.append((name, size))
This seems useless? Just iterate over reversed(dims_dict.items()) below?
- # Convert to canonical form: list of (dim_name, size)
- canonical_dims: list[tuple[str, int | np.integer | TensorVariable]] = []
- for name, size in dims_dict.items():
-     canonical_dims.append((name, size))
# Store original dimensions for later use with axis
original_dims = list(x.type.dims)
This isn't used other than to copy it below; just define it there directly?
- # Store original dimensions for later use with axis
- original_dims = list(x.type.dims)
for insert_dim, insert_axis in sorted(
    zip(new_dim_names, axis), key=lambda x: x[1]
):
Maybe you can use np.insert? You may also want to normalize the axis before you get here, if they are negative. There's a helper in npy2_compat.py.
If it works fine with negative axes, ignore this.
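On the negative-axis point, a self-contained sketch of normalization plus insertion; normalize_axes here is illustrative, standing in for the helper referred to above, and the variable names mirror the loop quoted in this thread:

def normalize_axes(axis, ndim_out):
    """Map possibly-negative axis values into [0, ndim_out)."""
    normalized = []
    for a in axis:
        if a < 0:
            a += ndim_out
        if not 0 <= a < ndim_out:
            raise ValueError(f"axis {a} out of bounds for {ndim_out} output dims")
        normalized.append(a)
    return normalized


original_dims = ["a", "b"]
new_dim_names = ["batch", "channel"]
axis = [0, -1]  # positions in the *output*, so -1 means "last"

ndim_out = len(original_dims) + len(new_dim_names)
out_dims = list(original_dims)
# Inserting in ascending axis order keeps each requested position valid.
for insert_dim, insert_axis in sorted(
    zip(new_dim_names, normalize_axes(axis, ndim_out)), key=lambda t: t[1]
):
    out_dims.insert(insert_axis, insert_dim)
print(out_dims)  # ['batch', 'a', 'b', 'channel']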
Add expand_dims operation for labeled tensors

This PR adds support for the expand_dims operation in PyTensor's labeled tensor system, allowing users to add new dimensions to labeled tensors with explicit dimension names.

Key Features
ExpandDims operation that adds a new dimension to an XTensorVariable

Implementation Details
The implementation includes:
New ExpandDims class in pytensor/xtensor/shape.py that handles: …
Rewriting rule in pytensor/xtensor/rewriting/shape.py that: …
Comprehensive test suite in tests/xtensor/test_shape.py covering: …

Usage Example
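(The example itself did not survive extraction; below is a minimal sketch of the intended usage, with assumed import paths and using the method form discussed in the review.)

import numpy as np
from xarray import DataArray

# Assumed import locations, based on the modules this PR touches.
from pytensor.xtensor import xtensor
from tests.xtensor.util import xr_function

x = xtensor("x", dims=("a", "b"), shape=(2, 3))
y = x.expand_dims("batch")  # add a size-1 "batch" dimension, xarray-style

fn = xr_function([x], y)
x_test = DataArray(np.arange(6.0).reshape(2, 3), dims=("a", "b"))
print(fn(x_test).dims)  # expected: ('batch', 'a', 'b')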
Testing
The implementation includes extensive tests that verify: …
📚 Documentation preview 📚: https://pytensor--1449.org.readthedocs.build/en/1449/