
[Doc] RayServe Single-Host TPU v6e Example with vLLM #47814

Open · wants to merge 16 commits into base: master
Conversation

@ryanaoleary (Contributor) commented Sep 25, 2024

Why are these changes needed?

This PR adds a new guide to the Ray docs detailing how to serve an LLM with vLLM and single-host TPUs on GKE. It has been tested by running through the steps in the proposed guide and verifying correct output. The guide uses sample code from https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/tree/main/ai-ml/gke-ray/rayserve/llm/tpu.
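For readers skimming the PR, the general shape of such a deployment is a RayService custom resource whose worker group is scheduled onto a TPU slice. The sketch below is illustrative only, assuming the structure of the linked kubernetes-engine-samples manifest; the app name, import path, model ID, and image tags are placeholders, not the sample's exact values.

```yaml
# Illustrative sketch -- see the linked kubernetes-engine-samples repo for the
# tested manifest. Image tags, app name, and model ID are placeholders.
apiVersion: ray.io/v1
kind: RayService
metadata:
  name: vllm-tpu-example
spec:
  serveConfigV2: |
    applications:
      - name: llm
        import_path: serve_tpu:model        # hypothetical module:app
        runtime_env:
          env_vars:
            MODEL_ID: "meta-llama/Llama-3.1-70B"   # placeholder model
  rayClusterConfig:
    headGroupSpec:
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-head
              image: rayproject/ray:latest   # placeholder image
    workerGroupSpecs:
      - groupName: tpu-group
        replicas: 1
        numOfHosts: 1                        # single-host TPU slice
        rayStartParams: {}
        template:
          spec:
            nodeSelector:
              cloud.google.com/gke-tpu-accelerator: tpu-v6e-slice
              cloud.google.com/gke-tpu-topology: 2x4
            containers:
              - name: ray-worker
                image: rayproject/ray:latest # placeholder image
                resources:
                  limits:
                    google.com/tpu: "8"      # request all 8 chips on the host
```

The `nodeSelector` labels steer the pod onto a GKE TPU node pool, and `google.com/tpu` reserves the chips; the guide itself walks through creating the cluster and node pool first.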

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: Ryan O'Leary <[email protected]>
@ryanaoleary (Contributor Author) commented:

cc: @andrewsykim

@kevin85421 kevin85421 self-assigned this Sep 25, 2024
@ryanaoleary ryanaoleary changed the title RayServe Multi-Host TPU Example with vLLM [Doc] RayServe Multi-Host TPU Example with vLLM Oct 3, 2024

stale bot commented Feb 1, 2025

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

  • If you'd like to keep this open, just leave any comment, and the stale label will be removed.

@stale stale bot added the stale label Feb 1, 2025
@stale stale bot removed the stale label Feb 4, 2025
@ryanaoleary ryanaoleary changed the title [Doc] RayServe Multi-Host TPU Example with vLLM [Doc] RayServe Single-Host TPU v5e and v6e Example with vLLM Feb 4, 2025
@ryanaoleary ryanaoleary changed the title [Doc] RayServe Single-Host TPU v5e and v6e Example with vLLM [Doc] RayServe Single-Host TPU v6e Example with vLLM Feb 4, 2025
@ryanaoleary (Contributor Author) commented:
I went through and updated this guide for the newer TPU Trillium (v6e) so that it'll be more useful. It no longer requires multi-host serving, since v6e can fit Llama 70B on a single node. I'll create a separate PR with a guide showcasing serving Llama 405B with multi-host TPUs. cc: @andrewsykim @kevin85421
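As a rough sketch of why a single v6e host suffices: an 8-chip v6e slice can hold Llama 70B by sharding the model across the chips with tensor parallelism, which vLLM configures through engine arguments. Assuming the serve app reads its configuration from environment variables (the variable names below are hypothetical, not necessarily the sample's exact ones):

```yaml
# Illustrative env for a single-host v6e-8 worker (8 chips, 2x4 topology).
# Variable names are hypothetical; check the sample app for the real ones.
runtime_env:
  env_vars:
    MODEL_ID: "meta-llama/Llama-3.1-70B"
    TENSOR_PARALLEL_SIZE: "8"   # shard weights across all 8 v6e chips
```

With 8-way tensor parallelism, each chip holds roughly one-eighth of the 70B weights, which is what removes the need for a multi-host slice.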

3 participants