
Add support for einsum operation to pytorch parser (requires 1116) #1273

Open
wants to merge 15 commits into main
Conversation

JanFSchulte
Contributor

Builds on Chang's keras v3 PR #1116 and exposes the new einsum implementation through the pytorch parser. PyTorch doesn't have an equivalent to EinsumDense, but allowing the use of einsum operations in custom models would still be useful.
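
For context, here is a minimal sketch of the kind of custom model this targets, using plain `torch.einsum` (the module and the equation are illustrative, not code from this PR):

```python
import torch
import torch.nn as nn


class OuterProduct(nn.Module):
    """Illustrative module: batched outer product expressed with einsum."""

    def forward(self, x, y):
        # (B, i) x (B, j) -> (B, i, j)
        return torch.einsum('bi,bj->bij', x, y)


# With this PR, the pytorch parser can map such an einsum node onto the
# einsum implementation introduced in #1116.
out = OuterProduct()(torch.rand(8, 4), torch.rand(8, 3))  # shape (8, 4, 3)
```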

Type of change

  • New feature (non-breaking change which adds functionality)

Tests

Added 2 use cases (outer product and batch matrix multiplication) to the pytests; both work without issues. A rough sketch of such a test is shown below.
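
A rough sketch of how a pytest case might exercise the converter for the batch-matrix-multiplication case. The hls4ml entry points used here (`config_from_pytorch_model`, `convert_from_pytorch_model`, `compile`, `predict`) are the public API, but the exact argument names, input-shape handling, and tolerances are assumptions and may differ from the actual test additions in this PR.

```python
import numpy as np
import torch
import hls4ml


class BatchMatMul(torch.nn.Module):
    def forward(self, x, y):
        # Batch matrix multiplication via einsum: (B, i, j) x (B, j, k) -> (B, i, k)
        return torch.einsum('bij,bjk->bik', x, y)


model = BatchMatMul().eval()
x, y = torch.rand(8, 4, 5), torch.rand(8, 5, 3)

# Assumption: input shapes are passed when building the config; the real
# tests may pass them differently.
config = hls4ml.utils.config_from_pytorch_model(model, [(None, 4, 5), (None, 5, 3)])
hls_model = hls4ml.converters.convert_from_pytorch_model(
    model,
    hls_config=config,
    output_dir='hls4ml_einsum_test',
)
hls_model.compile()

# Compare the HLS emulation against the PyTorch reference (tolerances are illustrative)
np.testing.assert_allclose(
    hls_model.predict([x.numpy(), y.numpy()]).reshape(8, 4, 3),
    model(x, y).detach().numpy(),
    rtol=1e-2, atol=1e-2,
)
```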

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

@JanFSchulte added the please test (Trigger testing by creating local PR branch) label on Apr 15, 2025