
Centralize cuda_arch capability definition #202

Open

Copilot wants to merge 7 commits into master from copilot/sub-pr-199

Conversation

Contributor

Copilot AI commented Mar 12, 2026

Wait for:

cuda_arch=75 was duplicated across 5 package specs in cuda/spack.yaml and tf/spack.yaml, making it hard to retarget GPU architecture.

Changes

  • spack-environment/packages.yaml: Added cuda_arch=75 as a conditional requirement in packages: all: require with when: '+cuda', so it applies to all CUDA-enabled packages without affecting CPU-only packages.
  • cuda/spack.yaml and tf/spack.yaml: Drop inline cuda_arch=75 from acts, arrow, celeritas, py-torch, and py-tensorflow specs.
```yaml
# spack-environment/packages.yaml
packages:
  all:
    require:
    - when: '+cuda'
      any_of: [cuda_arch=75, '@:']
```

To target a different GPU architecture (currently Compute Capability 7.5 / Turing: T4, RTX 2xxx, Quadro RTX), change cuda_arch=75 in this single location.
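For illustration, here is a sketch of what a spec list in cuda/spack.yaml might look like after this change; the exact specs and variant flags are assumptions, and only the removal of the inline cuda_arch=75 pin comes from this PR:

```yaml
# spack-environment/cuda/spack.yaml (illustrative sketch; spec list assumed)
spack:
  specs:
  - acts +cuda        # cuda_arch=75 is now injected by the packages.yaml require
  - celeritas +cuda
  - py-torch +cuda
```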





Co-authored-by: wdconinc <4656391+wdconinc@users.noreply.github.com>
Copilot AI changed the title [WIP] Address feedback on centralizing cuda_arch capability definition → Centralize cuda_arch capability definition Mar 12, 2026
Contributor

@wdconinc wdconinc left a comment


Looks sensible. Maybe there's a risk this will enable cuda_arch and therefore +cuda for packages where we currently don't explicitly enable +cuda, but would that be a bad thing?

Base automatically changed from pr/arrow_cuda to master March 12, 2026 23:12
@wdconinc wdconinc marked this pull request as ready for review March 12, 2026 23:13
Copilot AI review requested due to automatic review settings March 12, 2026 23:13
Contributor

Copilot AI left a comment


Pull request overview

This PR centralizes the CUDA compute capability setting for the Spack CUDA-related environments by introducing a shared cuda_arch.yaml include and removing per-spec cuda_arch=75 pins.

Changes:

  • Add a new spack-environment/cuda_arch.yaml and include it from CUDA/TensorFlow environments.
  • Remove explicit cuda_arch=75 from CUDA-enabled specs in spack-environment/cuda/spack.yaml.
  • Remove explicit cuda_arch=75 from the TensorFlow CUDA spec in spack-environment/tf/spack.yaml.
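The contents of the new include file are not shown in this conversation. Based on the packages.yaml snippet in the PR description, it plausibly looks like the following (contents and the relative include path are assumptions):

```yaml
# spack-environment/cuda_arch.yaml (assumed contents, mirroring the description's snippet)
packages:
  all:
    require:
    - when: '+cuda'
      any_of: [cuda_arch=75, '@:']
```

Each CUDA-enabled environment would then pull this in via an include entry, for example in spack-environment/cuda/spack.yaml:

```yaml
spack:
  include:
  - ../cuda_arch.yaml   # path relative to the environment directory; assumed
```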

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| spack-environment/tf/spack.yaml | Includes centralized CUDA arch config; drops per-spec cuda_arch=75 on TensorFlow. |
| spack-environment/cuda/spack.yaml | Includes centralized CUDA arch config; drops per-spec cuda_arch=75 on multiple CUDA specs. |
| spack-environment/cuda_arch.yaml | New shared config intended to define CUDA architecture in one place. |


wdconinc and others added 2 commits March 13, 2026 14:58
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: wdconinc <4656391+wdconinc@users.noreply.github.com>
Copilot AI requested a review from wdconinc March 13, 2026 20:49