Add a standalone tutorial for integrating custom op using sycl for Intel GPU #3470
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3470
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure as of commit 1804342 with merge base b78fc75.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "2.8"
one comment, otherwise lgtm
Content sounds good to me. Will let @svekars do the final review.
If you need to compile **SYCL** code (for example, ``.sycl`` files), use `torch.utils.cpp_extension.SyclExtension <https://docs.pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.SyclExtension>`_.
The setup process is very similar to C++/CUDA, except the compilation arguments need to be adjusted for SYCL.

Using ``sycl_extension`` is as simple as writing the following ``setup.py``:
Suggested change:
- Using ``sycl_extension`` is as simple as writing the following ``setup.py``:
+ Using ``sycl_extension`` is as straightforward as writing the following ``setup.py``:
This is just a personal preference, but I try to avoid using words like "simple," "easy," etc., in docs - just to make sure it's inclusive and folks don't feel frustrated if something doesn't work.
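For readers of this thread, here is a minimal sketch of what such a ``setup.py`` could look like with ``SyclExtension``. The package name ``extension_cpp``, the module name ``extension_cpp._C``, and the source file ``muladd.sycl`` are placeholders for illustration, not names taken from the tutorial under review:

```python
# Minimal sketch of a setup.py that builds a SYCL extension.
# Package, module, and source-file names are placeholders.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, SyclExtension

setup(
    name="extension_cpp",
    ext_modules=[
        SyclExtension(
            "extension_cpp._C",   # compiled module, importable as extension_cpp._C
            ["muladd.sycl"],      # SYCL source files to compile
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

Any SYCL-specific compile flags mentioned in the quoted text would be supplied through the extension's compile arguments, in the same spirit as for C++/CUDA extensions.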
Defining the custom op and adding backend implementations
---------------------------------------------------------
First, let's write a Sycl function that computes ``mymuladd``:
Suggested change:
- First, let's write a Sycl function that computes ``mymuladd``:
+ First, let's write a SYCL function that computes ``mymuladd``:
let's be consistent in capitalization
}

// ==================================================
// Register Sycl Implementations to Torch Library
Suggested change:
- // Register Sycl Implementations to Torch Library
+ // Register SYCL Implementations to Torch Library
"ops", | ||
] | ||
|
||
Testing sycl extension operator |
Suggested change:
- Testing sycl extension operator
+ Testing SYCL extension operator
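For context, a rough sketch of what such a test might look like, assuming the operator ends up registered as ``torch.ops.extension_cpp.mymuladd`` (a placeholder name) and that an Intel GPU is reachable through the ``xpu`` device:

```python
import torch

# Rough sketch of a correctness check for the custom op.
# Assumes the extension registers torch.ops.extension_cpp.mymuladd
# (placeholder name) and that this PyTorch build has XPU support
# with an Intel GPU available.
def reference_muladd(a, b, c):
    return a * b + c

device = "xpu"
a = torch.randn(32, device=device)
b = torch.randn(32, device=device)

result = torch.ops.extension_cpp.mymuladd(a, b, 1.0)
expected = reference_muladd(a, b, 1.0)
torch.testing.assert_close(result, expected)
```

If desired, ``torch.library.opcheck`` can additionally be used to validate the operator registration itself.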