
Conversation

@Intron7 commented Sep 11, 2025

For now, this is a test with precompiled CUDA extensions.

@Intron7 added the run-gpu-ci (runs GPU CI) label Sep 11, 2025
@Intron7 marked this pull request as draft on September 11, 2025, 19:02
@flying-sheep left a comment

Oh wow, that looks super cool! Addresses all my previous concerns about manual specification of magic numbers for chunking, repetitive kernel wrapper code, and so on!

from rapids_singlecell._cuda import _qc_cuda as _qc

if sparse.isspmatrix_csr(X):
    _qc.sparse_qc_csr(

ditto, for all of these
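For readers outside the review thread: the quoted snippet picks a precompiled kernel based on the sparse-matrix format. A minimal sketch of that dispatch pattern, with illustrative names throughout (the kernel stubs and the `run_qc` wrapper are hypothetical stand-ins, not the actual rapids-singlecell API):

```python
# Minimal sketch of format-based kernel dispatch; the kernel names and
# the run_qc wrapper are hypothetical stand-ins for compiled extensions.

def sparse_qc_csr(X):
    return "csr kernel"

def sparse_qc_csc(X):
    return "csc kernel"

_KERNELS = {"csr": sparse_qc_csr, "csc": sparse_qc_csc}

def run_qc(X, fmt):
    # Fail loudly on formats without a compiled kernel instead of
    # silently falling through.
    try:
        kernel = _KERNELS[fmt]
    except KeyError:
        raise TypeError(f"unsupported sparse format: {fmt!r}") from None
    return kernel(X)
```

The lookup-table form keeps the per-format wrapper code in one place, which is the repetition concern the review praises the PR for removing.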


@flying-sheep left a comment


Looks awesome!

I didn’t go through the C++ code, assuming it’s a straight port.


Would be good to check that these two are the same except for the base image; there’s already some drift in the comments and indentation!


@Intron7 commented Sep 18, 2025

Todo:

  • Protect kernels against null pointers
  • Figure out whether raw pointers or nb::ndarray are the better option for passing arguments
  • Add keyword arguments
  • Figure out CUDA 13 support


@flying-sheep left a comment


Please add positional-only arguments.
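For context, this is what positional-only arguments look like on the Python side: parameters before the `/` marker cannot be passed by keyword. The signature below is a hypothetical illustration, not the actual binding:

```python
# Hypothetical signature: array buffers are positional-only (before `/`),
# options are keyword-only (after `*`).

def sparse_qc_csr(data, indptr, indices, /, *, n_cells=None):
    return len(data)
```

In the compiled extension, the equivalent effect comes from the binding library's positional-only marker (or from leaving the arguments unnamed).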

Also, please answer #455 (comment).

Figure out if pointers or nb:arrays are the better option for passing arg

Probably arrays; why would you use pointers when nanobind has first-class ndarray support?

Most importantly, I see some `int(something.data.ptr)`; you should definitely never do that.
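To illustrate the objection with a CPU-side stand-in (ctypes here plays the role of the raw device pointer): a bare integer address drops every guarantee the array object carried.

```python
import ctypes

# A bare integer address, analogous to int(arr.data.ptr): it carries no
# dtype, no length, and no lifetime tie to the buffer it points into.
buf = (ctypes.c_double * 3)(1.0, 2.0, 3.0)
addr = ctypes.addressof(buf)

# Reading back works only while `buf` stays alive and only if the
# caller guesses dtype and length correctly:
values = list((ctypes.c_double * 3).from_address(addr))

# Passing the array object itself (e.g. as an nb::ndarray parameter)
# lets the binding layer hold a reference and verify dtype and shape.
```

If `buf` were garbage-collected before the kernel ran, `addr` would silently dangle; that is the failure mode the review is warning about.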


@flying-sheep left a comment


Looks good!
