
Commit 2759ae8

leofang and kkraus14 authored
Apply suggestions from code review
Co-authored-by: Keith Kraus <[email protected]>
1 parent 24ec109 commit 2759ae8

File tree

3 files changed: +8 −8 lines changed


README.md

+3 −3

@@ -4,9 +4,9 @@ CUDA Python is the home for accessing NVIDIA’s CUDA platform from Python. It c

 * [cuda.core](https://nvidia.github.io/cuda-python/cuda-core/latest): Pythonic access to CUDA Runtime and other core functionalities
 * [cuda.bindings](https://nvidia.github.io/cuda-python/cuda-bindings/latest): Low-level Python bindings to CUDA C APIs
-* [cuda.cooperative](https://nvidia.github.io/cccl/cuda_cooperative/): A Python package providing CUB's reusable block-wide and warp-wide *device* primitives for use within Numba CUDA kernels
-* [cuda.parallel](https://nvidia.github.io/cccl/cuda_parallel/): A Python package for easy access to highly efficient and customizable parallel algorithms, like `sort`, `scan`, `reduce`, `transform`, etc, that are callable on the *host*.
-* [numba.cuda](https://nvidia.github.io/numba-cuda/): Numba's CUDA target for writing CUDA SIMT kernels in Python.
+* [cuda.cooperative](https://nvidia.github.io/cccl/cuda_cooperative/): A Python package providing CCCL's reusable block-wide and warp-wide *device* primitives for use within Numba CUDA kernels
+* [cuda.parallel](https://nvidia.github.io/cccl/cuda_parallel/): A Python package for easy access to CCCL's highly efficient and customizable parallel algorithms, like `sort`, `scan`, `reduce`, `transform`, etc, that are callable on the *host*
+* [numba.cuda](https://nvidia.github.io/numba-cuda/): Numba's target for CUDA GPU programming by directly compiling a restricted subset of Python code into CUDA kernels and device functions following the CUDA execution model.

 For access to NVIDIA CPU & GPU Math Libraries, please refer to [nvmath-python](https://docs.nvidia.com/cuda/nvmath-python/latest).
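To make the reworded `numba.cuda` bullet concrete, here is a minimal sketch of what it describes: a restricted subset of Python compiled by Numba into a CUDA kernel and launched under the CUDA execution model. The kernel name, array size, and launch configuration are illustrative only and are not part of this commit.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale(out, x, factor):
    # Global thread index under the CUDA execution model (SIMT grid of threads).
    i = cuda.grid(1)
    if i < x.size:  # guard threads that fall past the end of the array
        out[i] = x[i] * factor

x = np.arange(1024, dtype=np.float32)
out = np.zeros_like(x)
threads_per_block = 128
blocks = (x.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](out, x, 2.0)  # host-side kernel launch
```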

cuda_bindings/docs/source/install.md

+2 −2

@@ -46,11 +46,11 @@ $ conda install -c conda-forge cuda-python

 ### Requirements

 * CUDA Toolkit headers[^1]
-* static CUDA runtime[^2]
+* CUDA Runtime static library[^2]

 [^1]: User projects that `cimport` CUDA symbols in Cython must also use CUDA Toolkit (CTK) types as provided by the `cuda.bindings` major.minor version. This results in CTK headers becoming a transitive dependency of downstream projects through CUDA Python.

-[^2]: The static CUDA runtime (`libcudart_static.a` on Linux, `cudart_static.lib` on Windows) is part of CUDA Toolkit. If CUDA is installed from conda, it is contained in the `cuda-cudart-static` package.
+[^2]: The CUDA Runtime static library (`libcudart_static.a` on Linux, `cudart_static.lib` on Windows) is part of the CUDA Toolkit. If using conda packages, it is contained in the `cuda-cudart-static` package.

 Source builds require that the provided CUDA headers are of the same major.minor version as the `cuda.bindings` you're trying to build. Despite this requirement, note that the minor version compatibility is still maintained. Use the `CUDA_HOME` (or `CUDA_PATH`) environment variable to specify the location of your headers. For example, if your headers are located in `/usr/local/cuda/include`, then you should set `CUDA_HOME` with:
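As a hedged aside on the footnote above: once `cuda.bindings` is built against the CUDA Runtime static library, the linked Runtime version can be queried from Python. The sketch below assumes the `cuda.bindings.runtime` module layout of current `cuda-bindings` releases, where every call returns a `(status, ...)` tuple.

```python
from cuda.bindings import runtime as cudart

# Query the version of the CUDA Runtime statically linked into the bindings.
err, version = cudart.cudaRuntimeGetVersion()
if err != cudart.cudaError_t.cudaSuccess:
    raise RuntimeError(f"cudaRuntimeGetVersion failed with {err}")

# The integer is encoded as major * 1000 + minor * 10 (e.g. 12040 -> CUDA 12.4).
print(f"CUDA Runtime {version // 1000}.{(version % 1000) // 10}")
```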

cuda_python/docs/source/index.rst

+3 −3

@@ -6,9 +6,9 @@ multiple components:

 - `cuda.core`_: Pythonic access to CUDA runtime and other core functionalities
 - `cuda.bindings`_: Low-level Python bindings to CUDA C APIs
-- `cuda.cooperative`_: A Python package providing CUB's reusable block-wide and warp-wide *device* primitives for use within Numba CUDA kernels
-- `cuda.parallel`_: A Python package for easy access to highly efficient and customizable parallel algorithms, like ``sort``, ``scan``, ``reduce``, ``transform``, etc, that are callable on the *host*
-- `numba.cuda`_: Numba's CUDA target for writing CUDA SIMT kernels in Python
+- `cuda.cooperative`_: A Python package providing CCCL's reusable block-wide and warp-wide *device* primitives for use within Numba CUDA kernels
+- `cuda.parallel`_: A Python package for easy access to CCCL's highly efficient and customizable parallel algorithms, like ``sort``, ``scan``, ``reduce``, ``transform``, etc, that are callable on the *host*
+- `numba.cuda`_: Numba's target for CUDA GPU programming by directly compiling a restricted subset of Python code into CUDA kernels and device functions following the CUDA execution model.

 For access to NVIDIA CPU & GPU Math Libraries, please refer to `nvmath-python`_.
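The `cuda.core` entry above is the Pythonic layer of the stack. As a rough illustration only (assuming the experimental `Device` API described in recent `cuda.core` documentation, which is still subject to change), selecting a device and creating a stream looks roughly like this:

```python
from cuda.core.experimental import Device

dev = Device(0)        # GPU ordinal 0; adjust for your system
dev.set_current()      # make it the current device for this thread
stream = dev.create_stream()

print(f"Using device {dev.device_id}")
dev.sync()             # wait for all work queued on this device to finish
```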
