
Adding expand_dims for xtensor #1449

Open
wants to merge 8 commits into base: labeled_tensors

Conversation

@AllenDowney commented Jun 6, 2025

Add expand_dims operation for labeled tensors

This PR adds support for the expand_dims operation in PyTensor's labeled tensor system, allowing users to add new dimensions to labeled tensors with explicit dimension names.

Key Features

  • New ExpandDims operation that adds a new dimension to an XTensorVariable
  • Support for both static and symbolic dimension sizes
  • Automatic broadcasting when size > 1
  • Integration with existing tensor operations
  • Full compatibility with xarray's expand_dims behavior

Implementation Details

The implementation includes:

  1. New ExpandDims class in pytensor/xtensor/shape.py that handles:

    • Adding new dimensions with specified names
    • Support for both static and symbolic sizes
    • Shape inference and validation
  2. Rewriting rule in pytensor/xtensor/rewriting/shape.py (see the sketch after this list) that:

    • Converts labeled tensor operations to standard tensor operations
    • Handles broadcasting when needed
    • Validates symbolic sizes
  3. Comprehensive test suite in tests/xtensor/test_shape.py covering:

    • Basic dimension expansion
    • Static and symbolic sizes
    • Error cases and edge cases
    • Compatibility with xarray operations
    • Integration with other labeled tensor operations
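
At the plain-tensor level, the rewrite in item 2 roughly amounts to the following; this is a hand sketch assuming the new dimension is inserted in front, not the actual rewrite code:

import pytensor.tensor as pt

def lower_expand_dims(x_tensor, size):
    # insert a length-1 leading axis on the underlying (unlabeled) tensor
    out = x_tensor[None]
    # broadcast to the requested size when it is not statically 1
    if not (isinstance(size, int) and size == 1):
        new_shape = (size, *[x_tensor.shape[i] for i in range(x_tensor.ndim)])
        out = pt.broadcast_to(out, new_shape)
    return out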

Usage Example

from pytensor.xtensor import xtensor
# assuming the wrapper lives in pytensor/xtensor/shape.py (added in this PR)
from pytensor.xtensor.shape import expand_dims

# Create a labeled tensor
x = xtensor("x", dims=("city",), shape=(3,))

# Add a new dimension
y = expand_dims(x, "country")  # Adds a new dimension of size 1
z = expand_dims(x, "country", size=4)  # Adds a new dimension of size 4
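
For comparison, the xarray behavior being mirrored (xarray expresses a larger size through the dict form rather than a size kwarg, as discussed later in this thread):

import numpy as np
import xarray as xr

x = xr.DataArray(np.arange(3), dims=("city",))
y = x.expand_dims("country")       # new leading dim of size 1 -> dims ("country", "city")
z = x.expand_dims({"country": 4})  # new leading dim of size 4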

Testing

The implementation includes extensive tests that verify:

  • Correct behavior with various input shapes
  • Proper handling of symbolic sizes
  • Error cases (invalid dimensions, sizes, etc.)
  • Compatibility with xarray's expand_dims
  • Integration with other labeled tensor operations

📚 Documentation preview 📚: https://pytensor--1449.org.readthedocs.build/en/1449/

@AllenDowney (Author)

Now that we have this PR based on the right commit, @ricardoV94, it is ready for a first look.

One question: my first draft of this was based on a later commit -- this draft goes back to an earlier commit, and it looks like @register_xcanonicalize doesn't exist yet, so I've replaced it with @register_lower_xtensor, which seems to be its predecessor. Is that the right thing to do for now?

@ricardoV94 (Member)

That's the new name; it better represents the kind of rewrites it holds.

@AllenDowney (Author)

@ricardoV94 I think this is a step toward handling symbolic sizes, but there are a couple of places where I'm not sure what the right behavior is. See the comments in test_shape.py, test_expand_dims_implicit.

Do those tests make sense? Are there more cases that should be covered?

@ricardoV94 (Member)

The simplest test for symbolic expand_dims is:

# xtensor, function, xr_arange_like, xr_assert_allclose as used in tests/xtensor
size_new_dim = xtensor("size_new_dim", shape=(), dtype=int)
x = xtensor("x", dims=("a",), shape=(3,))  # dims added here for completeness
y = x.expand_dims(new_dim=size_new_dim)
xr_function = function([x, size_new_dim], y)

x_test = xr_arange_like(x)
size_new_dim_test = DataArray(np.array(5, dtype=int))
result = xr_function(x_test, size_new_dim_test)
expected_result = x_test.expand_dims(new_dim=size_new_dim_test)
xr_assert_allclose(result, expected_result)

You can parametrize the test to try default and explicit non-default axis as well.
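
A minimal sketch of that parametrization, assuming the xr_function / xr_arange_like / xr_assert_allclose helpers from the existing test module and that the method accepts an axis argument as in xarray:

import pytest

from pytensor.xtensor import xtensor
# xr_function, xr_arange_like, xr_assert_allclose: helpers from tests/xtensor


@pytest.mark.parametrize("axis", [None, 0, 1])
def test_expand_dims_symbolic_size(axis):
    size_new_dim = xtensor("size_new_dim", shape=(), dtype=int)
    x = xtensor("x", dims=("a",), shape=(3,))
    kwargs = {} if axis is None else {"axis": axis}

    y = x.expand_dims(new_dim=size_new_dim, **kwargs)
    fn = xr_function([x, size_new_dim], y)

    x_test = xr_arange_like(x)
    xr_assert_allclose(fn(x_test, 5), x_test.expand_dims(new_dim=5, **kwargs))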

Side note: what is an implicit expand_dims? I don't think that's a thing.

@AllenDowney (Author)

@ricardoV94 I've addressed most of your comments on the previous round, and made a first pass at adding support for multiple dimensions. Please take a look at the expand_dims wrapper function, which canonicalizes the inputs and loops through them to make a series of Ops.

Assuming that adding multiple dimensions is rare, what do you think of the loop option, as opposed to making a single Op that adds multiple dimensions?
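
For concreteness, the loop option looks roughly like this; the canonicalization helper and the exact way each Op is applied are illustrative, not the actual code:

def expand_dims(x, dim=None, **dim_kwargs):
    # canonicalize the requested dims into an ordered {name: size} mapping
    dims_dict = _canonicalize_expand_dims_args(dim, dim_kwargs)  # hypothetical helper

    out = x
    for name, size in dims_dict.items():
        # one ExpandDims Op per new dimension, applied in sequence
        out = ExpandDims(dim=name)(out, size)
    return out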


# Test behavior with symbolic size > 1
# NOTE: This test documents our current behavior where expand_dims broadcasts to the requested size.
# This differs from xarray's behavior where expand_dims always adds a size-1 dimension.
Member

This is not true?

Author

I'm not sure about the general claim in the note, but at least in this case, it seems like we're getting the behavior we want from xtensor, but running the same operation with xarray does something different, causing the test to fail. Here's Cursor's summary:

The test failure confirms that our current implementation of expand_dims broadcasts to the requested size (4 in this case), while xarray's behavior is to always add a size-1 dimension. This is evident from the test output, where the left side (our implementation) has a shape of (batch: 4, a: 2, b: 3), and the right side (xarray's behavior) has a shape of (batch: 1, a: 2, b: 3).

I'm inclined to keep this test to note the discrepancy.

@ricardoV94 (Member) commented Jun 10, 2025

I don't understand the point. The test shows xarray accepts expand_dims({"batch": 4}) and I guess also expand_dims(batch=4).

That is clearly at odds with the comment that xarray always expands with size of 1.

And that's exactly the behavior we want to replicate. If the size kwarg is something that doesn't exist in xarray (I suspect it doesn't, how would you map each size to each new dim?) we shouldn't introduce it; we want to mimic their API.
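
A quick check of what xarray does with the dict and kwargs forms:

import numpy as np
import xarray as xr

x = xr.DataArray(np.zeros((2, 3)), dims=("a", "b"))
print(x.expand_dims({"batch": 4}).sizes)  # batch: 4, a: 2, b: 3
print(x.expand_dims(batch=4).sizes)       # same result via kwargs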

@ricardoV94 (Member)

> Assuming that adding multiple dimensions is rare, what do you think of the loop option, as opposed to making a single Op that adds multiple dimensions?

That's fine. We used that for other Ops and we can revisit later if we want it to be fused.

@AllenDowney (Author)

@ricardoV94 This is ready for another look.

The rewrite was a shambles, but I think I have a clearer idea now.

@ricardoV94 (Member) commented Jun 10, 2025

I left some comments above.

Rewrite looks good. As we discussed, we should redo the tests to use expand_dims as a method (like xarray users would).

Also, I suspect xarray allows specifying the size like x.expand_dims(dim_a=1, dim_b=2), which is equivalent to x.expand_dims({"dim_a": 1, "dim_b": 2}). At least that was a pattern I noticed in other xarray methods. I saw you had a test for multiple dims with dict, but I didn't see one with kwargs.
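
A sketch of such a kwargs-based test, mirroring the dict-based one (helper names assumed from the existing test module):

from pytensor.xtensor import xtensor
# xr_function, xr_arange_like, xr_assert_allclose: helpers from tests/xtensor


def test_expand_dims_multiple_kwargs():
    x = xtensor("x", dims=("a",), shape=(3,))
    y = x.expand_dims(dim_a=1, dim_b=2)
    fn = xr_function([x], y)

    x_test = xr_arange_like(x)
    xr_assert_allclose(fn(x_test), x_test.expand_dims(dim_a=1, dim_b=2))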

@AllenDowney (Author)

@ricardoV94 I cleaned up the code as suggested and took a first cut at handling the axis parameter. Please take a look.

@ricardoV94 (Member) left a comment

This looks great. Just the question about the size kwarg and small notes.

Comment on lines +433 to +434
# Extract size from dim_kwargs if present
size = dim_kwargs.pop("size", 1) if dim_kwargs else 1
Member

Is this a thing in xarray?
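
For reference, xarray's expand_dims routes arbitrary keywords into **dim_kwargs as new dimension names, so a size keyword would be interpreted as a dimension literally called "size" rather than as a size argument:

import numpy as np
import xarray as xr

x = xr.DataArray(np.arange(3), dims=("a",))
print(x.expand_dims(size=4).sizes)  # a dimension named "size" of length 4 is created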

self,
dim: str | Sequence[str] | dict[str, int | Sequence] | None = None,
create_index_for_new_dim: bool = True,
axis: int | None = None,
Member

I assume?

Suggested change
axis: int | None = None,
axis: int | Sequence[int] | None = None,

@@ -8,10 +8,10 @@
from itertools import chain, combinations

import numpy as np
import pytest
Member

Isn't this needed?

Comment on lines +408 to +412
# Symbolic size=1
size_sym_1 = scalar("size_sym_1", dtype="int64")
y = x.expand_dims("country", size=size_sym_1)
fn = xr_function([x, size_sym_1], y)
xr_assert_allclose(fn(x_test, 1), x_test.expand_dims("country"))
@ricardoV94 (Member) commented Jun 11, 2025

We don't need the special size behavior if it's not a thing in xarray

dims_dict = {}
for name, val in dim.items():
    if isinstance(val, Sequence | np.ndarray) and not isinstance(val, str):
        dims_dict[name] = len(val)
Member

Maybe add a warning in this branch, saying that coordinates are not used by PyTensor, which will simply take the length of the sequence?
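
A minimal sketch of that warning, placed inside the sequence branch of the loop quoted above (message wording is illustrative):

import warnings

for name, val in dim.items():
    if isinstance(val, Sequence | np.ndarray) and not isinstance(val, str):
        warnings.warn(
            f"Coordinate values for {name!r} are ignored by PyTensor; "
            "only the length of the sequence is used as the dimension size.",
            UserWarning,
        )
        dims_dict[name] = len(val)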

Comment on lines +462 to +465
# Convert to canonical form: list of (dim_name, size)
canonical_dims: list[tuple[str, int | np.integer | TensorVariable]] = []
for name, size in dims_dict.items():
    canonical_dims.append((name, size))
Member

This seems useless? Just iterate over reversed(dims_dict.items()) below?

Suggested change
# Convert to canonical form: list of (dim_name, size)
canonical_dims: list[tuple[str, int | np.integer | TensorVariable]] = []
for name, size in dims_dict.items():
    canonical_dims.append((name, size))
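
That is, the later loop can iterate in reverse directly; reversed() works on dict views in Python 3.8+. A tiny standalone demo:

dims_dict = {"country": 4, "batch": 2}
for name, size in reversed(dims_dict.items()):
    print(name, size)  # prints "batch 2" then "country 4"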

Comment on lines +467 to +468
# Store original dimensions for later use with axis
original_dims = list(x.type.dims)
Member

This isn't used other than to copy it below, just define it there directly?

Suggested change
# Store original dimensions for later use with axis
original_dims = list(x.type.dims)

Comment on lines +489 to +491
for insert_dim, insert_axis in sorted(
    zip(new_dim_names, axis), key=lambda x: x[1]
):
@ricardoV94 (Member) commented Jun 11, 2025

Maybe you can use np.insert? You may also want to normalize the axes before you get here if they are negative. There's a helper in npy2_compat.py.

If it works fine with negative values, ignore this.
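
For illustration, normalizing negative axes before the sorted insertion could look like this (the actual helper in npy2_compat.py is presumably a normalize_axis_index-style function; this standalone version is just a sketch):

def normalize_axes(axes, ndim):
    # map negative axis values into [0, ndim) so sorting by position is well defined
    return [ax + ndim if ax < 0 else ax for ax in axes]

# e.g. for a 3-dimensional result, axis=(-1, 0) becomes [2, 0]
print(normalize_axes([-1, 0], 3))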
