
@Roaimkhan

This PR updates the System Requirements section in README.md to provide more concrete GPU guidelines.

  1. Added VRAM ranges for Gemma-2B and Gemma-7B
  2. Included examples of popular GPUs (RTX 3060, RTX 4060, RTX 4090, A6000, H100)
  3. Clarified full-precision vs. quantized requirements (Float16/BF16 ≈ 19 GB, INT4 ≈ 5 GB)

This makes it easier for new users to quickly determine whether their hardware can run Gemma without searching externally.

@google-cla

google-cla bot commented Sep 6, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View the failed invocation of the CLA check for more information.

For the most up-to-date status, view the checks section at the bottom of the pull request.

* **Quantization**:
  * Float16/BF16: ~16 GB model load (plus ~20% overhead → ~19 GB total)
  * 4-bit (INT4): ~4 GB model load (plus ~20% overhead → ~5 GB total)
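The figures above follow from a simple back-of-the-envelope calculation: parameter count × bytes per parameter, plus a fixed overhead margin. A minimal sketch, assuming roughly 8 billion parameters for Gemma-7B and the 20% overhead used in the bullets (the function name and exact parameter count are illustrative, not from the README):

```python
def estimate_vram_gb(num_params: float, bits_per_param: int, overhead: float = 0.20) -> float:
    """Estimate total VRAM in GB: raw weight size plus a fixed overhead margin.

    num_params: total parameter count (e.g. 8e9, an assumed figure for Gemma-7B)
    bits_per_param: 16 for Float16/BF16, 4 for INT4 quantization
    overhead: fractional margin for activations, KV cache, and framework buffers
    """
    model_gb = num_params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB
    return model_gb * (1 + overhead)

# Float16/BF16: 8e9 params * 2 bytes = 16 GB, +20% -> 19.2 GB (~19 GB)
print(round(estimate_vram_gb(8e9, 16), 1))
# INT4: 8e9 params * 0.5 bytes = 4 GB, +20% -> 4.8 GB (~5 GB)
print(round(estimate_vram_gb(8e9, 4), 1))
```

This reproduces the ~19 GB and ~5 GB totals quoted above and lets users plug in other model sizes or precisions.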


This improves user convenience by putting GPU compatibility details directly in the README, so readers don't have to look them up separately.
