

@praveenhosdrug123 (Contributor) commented Oct 17, 2025

Summary

Applies a distributed initialization fix to the model backbone to resolve OOM errors during initialization of 7B+ parameter models on 8GB TPU devices. This PR adds a helper function that distributes the initializers at instantiation time.

Issue

Token embedding initialization materializes the full embedding table as one large contiguous array, placing all of its weights on a single device.
Combined with the forward passes that run during backbone initialization, this causes a 2x to 3x memory spike and triggers OOM on TPUs with limited HBM.
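
For a rough sense of scale (the vocabulary size, embedding width, and dtype below are assumptions for a generic 7B-class model, not numbers from this PR), the embedding table alone is a multi-gigabyte contiguous allocation, and a 2x to 3x transient spike on top of the other backbone weights quickly exhausts an 8GB HBM budget:

```python
# Back-of-the-envelope estimate for an unsharded token embedding table.
# All concrete numbers are illustrative assumptions, not values from this PR.
vocab_size = 128_000      # assumed vocabulary size
hidden_dim = 4_096        # assumed embedding width
bytes_per_param = 4       # float32

embedding_bytes = vocab_size * hidden_dim * bytes_per_param
print(f"Embedding table alone: {embedding_bytes / 1e9:.2f} GB on one device")

# A 2x to 3x transient spike during initialization (initializer output plus
# forward-pass activations) multiplies this peak before sharding can help.
print(f"Estimated init peak: {2 * embedding_bytes / 1e9:.2f}"
      f" to {3 * embedding_bytes / 1e9:.2f} GB")
```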

Solution

Implements a _distribute_initializer helper that wraps embedding initializers with an explicit TensorLayout, properly sharding weights across TPU devices during instantiation. Validated on an 8-device TPU: models that previously OOM'd during backbone initialization now load successfully.
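
The sketch below illustrates the general pattern, not the _distribute_initializer implementation from this PR: jitting an initializer with an explicit output sharding lets XLA materialize each shard directly on its own device, so the full table is never allocated on a single device. The wrapper name, mesh shape, and axis name are assumptions for illustration.

```python
import jax
import jax.numpy as jnp
from functools import partial
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P


def make_sharded_initializer(init_fn, mesh, partition_spec):
    """Wrap an initializer so its output is created already sharded.

    Hypothetical helper for illustration only; it is not the PR's
    _distribute_initializer. With an explicit out_shardings, each device
    materializes only its own shard of the embedding table.
    """
    sharding = NamedSharding(mesh, partition_spec)

    @partial(jax.jit, static_argnums=(1, 2), out_shardings=sharding)
    def sharded_init(key, shape, dtype):
        return init_fn(key, shape, dtype)

    return sharded_init


# Hypothetical usage on an 8-device TPU mesh, sharding the vocabulary axis.
mesh = Mesh(jax.devices(), axis_names=("model",))
init = make_sharded_initializer(jax.random.normal, mesh, P("model", None))
# embeddings = init(jax.random.PRNGKey(0), (128_000, 4_096), jnp.float32)
```

Because the initializer output carries its sharding from the start, the peak per-device allocation during instantiation drops by roughly the mesh size along the sharded axis.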

Reference

For memory profiling analysis, cache locality theory, validation logs and alternative solutions considered, refer to: Doc

Related PR: keras-team/keras-hub#2441

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @praveenhosdrug123, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements a crucial fix to prevent Out-Of-Memory errors during the initialization of very large Keras models, particularly on resource-constrained TPU devices. By introducing a mechanism to distribute the initialization of token embeddings, it ensures that model weights are sharded across available devices from the outset, significantly reducing memory consumption and enabling the successful loading of models that previously failed.

Highlights

  • OOM Resolution for Large Models: Addresses Out-Of-Memory (OOM) errors encountered when initializing 7B+ parameter models on 8GB TPU devices by distributing token embedding initialization.
  • New Helper Function _distribute_initializer: Introduces _distribute_initializer in keras/src/backend/jax/distribution_lib.py to enable distribution-aware token embedding initialization for the JAX backend.
  • Distributed Initialization Logic: The new helper function wraps JAX random initializers with explicit TensorLayout to properly shard weights across TPU devices during instantiation, preventing memory spikes (see the sketch after this list for how this relates to the public keras.distribution API).
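
For context, the sketch below shows how an embedding layout can be requested through Keras 3's public keras.distribution API; the weight path regex, mesh shape, and exact constructor signatures are assumptions and vary slightly across Keras 3 releases. The new helper differs in that it applies the layout at initializer time, so the weight is never materialized unsharded before being distributed.

```python
import keras

# Hypothetical 8-device mesh: 1-way data parallel, 8-way model parallel.
devices = keras.distribution.list_devices()
mesh = keras.distribution.DeviceMesh(
    shape=(1, 8), axis_names=("batch", "model"), devices=devices
)

# Map weight paths (regexes) to layouts. The embedding path below is an
# assumed example, not a path taken from this PR.
layout_map = keras.distribution.LayoutMap(mesh)
layout_map["token_embedding/embeddings"] = ("model", None)

keras.distribution.set_distribution(
    keras.distribution.ModelParallel(
        layout_map=layout_map, batch_dim_name="batch"
    )
)
```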

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a new function, _distribute_initializer, to handle the distribution of token embedding initializers in the JAX backend. This function aims to resolve OOM errors encountered during the initialization of large models on TPUs with limited HBM by sharding weights across TPU devices during instantiation. The code includes argument validation, sharding logic based on tensor layout, and application of mean/stddev for relevant distributions. The review focuses on error handling, code clarity, and adherence to the Keras API design guidelines.

@codecov-commenter commented Oct 17, 2025

Codecov Report

❌ Patch coverage is 2.94118% with 33 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.60%. Comparing base (e2be4de) to head (b36b051).
⚠️ Report is 10 commits behind head on master.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| keras/src/backend/jax/distribution_lib.py | 2.94% | 33 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21755      +/-   ##
==========================================
- Coverage   82.63%   82.60%   -0.03%     
==========================================
  Files         572      577       +5     
  Lines       58555    59223     +668     
  Branches     9153     9286     +133     
==========================================
+ Hits        48385    48922     +537     
- Misses       7843     7924      +81     
- Partials     2327     2377      +50     
| Flag | Coverage Δ |
| --- | --- |
| keras | 82.41% <2.94%> (-0.03%) ⬇️ |
| keras-jax | 63.27% <2.94%> (+0.07%) ⬆️ |
| keras-numpy | 57.50% <2.94%> (-0.07%) ⬇️ |
| keras-openvino | 34.31% <2.94%> (-0.07%) ⬇️ |
| keras-tensorflow | 64.04% <2.94%> (+0.07%) ⬆️ |
| keras-torch | 63.58% <2.94%> (+0.08%) ⬆️ |

Flags with carried forward coverage won't be shown. Click here to find out more.



@hertschuh (Collaborator) commented

@praveenhosdrug123

Thank you for the investigation. This is indeed an issue.

Somebody on the team is working on a fix that's generally applicable to all variables so that you don't have to explicitly use the fix that you provided here.

@praveenhosdrug123 (Contributor, Author) commented

@hertschuh - Thanks for the feedback and for taking the time to review the document.

I want to clarify the technical issue:
The OOM problem is about large contiguous memory allocation, not total parameter count. Token embeddings are the largest single array and exceed device memory during initialization, even when the full model would fit after sharding.

Thank you for the context on the general solution. A few follow-up questions to help me understand the timeline:

  1. What's the expected completion date for the general fix?
  2. Will it handle the edge cases mentioned in the document (interleaving, quantization, LoRA)?
  3. Will it detect which variables actually need distribution?

The reason I ask: users are blocked on this today for 7B+ models on 8GB TPU devices.
If the general fix is months out, would it make sense to:

  • Merge this targeted fix as a stopgap
  • Mark it deprecated once the general solution ships
  • Remove it in a future release

Let me know if that's feasible.
