Unable to compile Rust-CUDA because of missing cuDNN includes. #204
Comments
Hi @jmhal, you can ignore those build failures if you won't be using either OptiX or cuDNN. The same goes for trying to run the OptiX examples. cudnn-sys is new, and it seems we didn't define an environment variable to let you configure the location of the cuDNN SDK/headers. On Linux, we expect to find the cuDNN headers under /usr/include:

```
root@088ab2700488:/workspaces/rust-cuda# ls -la /usr/include/cudnn*
lrwxrwxrwx 1 root root 26 Feb 28 05:53 /usr/include/cudnn.h -> /etc/alternatives/libcudnn
lrwxrwxrwx 1 root root 29 Feb 28 05:53 /usr/include/cudnn_adv.h -> /etc/alternatives/cudnn_adv_h
lrwxrwxrwx 1 root root 33 Feb 28 05:53 /usr/include/cudnn_backend.h -> /etc/alternatives/cudnn_backend_h
lrwxrwxrwx 1 root root 29 Feb 28 05:53 /usr/include/cudnn_cnn.h -> /etc/alternatives/cudnn_cnn_h
lrwxrwxrwx 1 root root 31 Feb 28 05:53 /usr/include/cudnn_graph.h -> /etc/alternatives/cudnn_graph_h
lrwxrwxrwx 1 root root 29 Feb 28 05:53 /usr/include/cudnn_ops.h -> /etc/alternatives/cudnn_ops_h
lrwxrwxrwx 1 root root 33 Feb 28 05:53 /usr/include/cudnn_version.h -> /etc/alternatives/cudnn_version_h
```

You can try to do the same in your environment until we add an env var to configure this. You can also try using our containers by building one with the included Docker files.
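Until such an env var exists, that symlink layout can be reproduced by hand. A minimal sketch, using scratch directories so it is safe to run anywhere; on a real machine `SRC` would be wherever the cuDNN archive was unpacked, `DST` would be `/usr/include`, and the commands would need `sudo`:

```shell
# Mirror cuDNN headers into the directory the build scripts search.
# SRC/DST are illustrative scratch dirs; the touched files are stand-ins
# for the real cuDNN headers.
SRC="$(mktemp -d)"
DST="$(mktemp -d)"
touch "$SRC/cudnn.h" "$SRC/cudnn_version.h"   # stand-ins for the real headers
for h in "$SRC"/cudnn*.h; do
    ln -sf "$h" "$DST/$(basename "$h")"       # one symlink per header
done
ls -la "$DST"                                 # verify the links were created
```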
Hi @jorge-ortega, thank you for your quick reply. Creating links in /usr/include made the cuDNN errors go away, but now I get name mismatches from OptiX. Since I don't believe we need OptiX for the moment, I will ignore them as you advised. Instead I tried to compile one of the examples:

It complains about LLVM, but I have LLVM installed and even set the variable.

I'll try the containers now. Best Regards.
Odd. Do you have
The OptiX name mismatches might have to do with the version of OptiX you have installed. I believe we currently support 7.3(?), and it will likely fail to compile with newer versions. Also note that we require the very specific nightly rustc specified in our rust-toolchain.toml. If you've cloned our repo, you should be good to go, but if you're using our crates from a different project root, you'll need to use the same nightly version as we do. The easiest way to do that is to copy the toolchain toml to your project root.
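That copy step can be sketched as follows, using scratch directories and a stand-in channel string (the real rust-toolchain.toml in the Rust-CUDA repo pins the exact nightly):

```shell
# Reuse Rust-CUDA's pinned nightly in your own project by copying its
# rust-toolchain.toml. Directories and the channel value are illustrative.
RUST_CUDA_CLONE="$(mktemp -d)"
MY_PROJECT="$(mktemp -d)"
# Stand-in for the file that ships in the Rust-CUDA repo:
printf '[toolchain]\nchannel = "nightly-YYYY-MM-DD"\n' \
    > "$RUST_CUDA_CLONE/rust-toolchain.toml"
cp "$RUST_CUDA_CLONE/rust-toolchain.toml" "$MY_PROJECT/"
cat "$MY_PROJECT/rust-toolchain.toml"   # rustup now selects this toolchain
```

With the file in place, rustup reads it automatically the next time `cargo build` runs in that project, so every contributor gets the same compiler.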
Yeah, we don't support newer OptiX versions yet.
Hello everyone,
I am trying to compile Rust-CUDA on Ubuntu 24.04 with CUDA Toolkit 12.8 and driver 550.120. My Cargo version is cargo 1.86.0 (adf9b6ad1 2025-02-28), and I have just cloned the repository from the main branch.
First, I had problems with the OptiX SDK.
cargo build
returned that it could not find the SDK. After digging a bit, I found this line in the optix-sys crate:

So I set OPTIX_ROOT to the directory where I placed the SDK, and it worked. But now I have another problem:
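The working OptiX configuration described here amounts to exporting a single variable before building (the SDK path below is illustrative; use wherever the installer was unpacked):

```shell
# Point optix-sys at the OptiX SDK; the exact path is illustrative.
export OPTIX_ROOT="$HOME/NVIDIA-OptiX-SDK-7.3.0-linux64-x86_64"
echo "OPTIX_ROOT=$OPTIX_ROOT"
# then rebuild:
#   cargo build
```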
So I downloaded cuDNN (cudnn-linux-x86_64-9.8.0.87_cuda12-archive). After expanding the archive, I copied the contents of its include and lib directories into /usr/local/cuda/include and /usr/local/cuda/lib64.
cargo build
gave me the same error. Then I tried setting the variables CUDNN_INCLUDE_DIR and CUDNN_LIBRARY to the include and lib directories, but that got me nowhere. The question is: how should I point Cargo to find the cuDNN includes? Looking at the code of the cudnn_sys crate gave me no directions.

Best Regards.
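For reference, the attempted (and unsuccessful) cuDNN configuration looked roughly like this; as noted earlier in the thread, cudnn-sys does not currently read any such variables, so setting them has no effect:

```shell
# Attempted workaround; cudnn-sys does not read these variables (yet),
# so the build behaves the same with or without them.
export CUDNN_INCLUDE_DIR=/usr/local/cuda/include
export CUDNN_LIBRARY=/usr/local/cuda/lib64
echo "CUDNN_INCLUDE_DIR=$CUDNN_INCLUDE_DIR"
echo "CUDNN_LIBRARY=$CUDNN_LIBRARY"
```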