
Conversation

@Jacksunwei
Collaborator

Summary

This PR adds a new RateLimitPlugin that enforces global rate limiting across all LLM models using a sliding window algorithm.

Features

  • Global rate limiting: Default 15 queries per minute (QPM) across all models
  • Sliding window algorithm: Accurate 60-second tracking window (see the sketch after this list)
  • Blocking behavior: Waits when limit exceeded (no errors thrown)
  • Thread-safe: Uses asyncio locks for concurrent request handling
  • Auto-cleanup: Automatically removes expired timestamps
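
For reviewers who want the mechanism at a glance, the sketch below shows the general shape of a blocking sliding-window limiter. It is a minimal illustration only; the class and method names (`SlidingWindowLimiter`, `acquire`) are not part of this PR, whose actual plugin code appears in the review discussion further down.

```python
import asyncio
import time
from collections import deque


class SlidingWindowLimiter:
  """Toy sliding-window limiter that waits instead of raising."""

  def __init__(self, max_requests: int = 15, window_seconds: float = 60.0):
    self._max_requests = max_requests
    self._window = window_seconds
    self._timestamps: deque[float] = deque()
    self._lock = asyncio.Lock()  # serializes check-and-record

  async def acquire(self) -> None:
    while True:
      async with self._lock:
        now = time.time()
        # Auto-cleanup: drop timestamps that fell out of the 60-second window.
        while self._timestamps and now - self._timestamps[0] >= self._window:
          self._timestamps.popleft()
        if len(self._timestamps) < self._max_requests:
          self._timestamps.append(now)
          return  # slot available, proceed immediately
        # Otherwise wait until the oldest request leaves the window.
        wait = self._window - (now - self._timestamps[0])
      await asyncio.sleep(max(wait, 0.01))  # sleep outside the lock
```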

Implementation Details

New files:

  • src/google/adk/plugins/rate_limit_plugin.py - Main plugin implementation
  • tests/unittests/plugins/test_rate_limit_plugin.py - Comprehensive test suite (7 tests)

Modified files:

  • src/google/adk/plugins/__init__.py - Exported RateLimitPlugin in the public API (an illustrative sketch of the export follows below)
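
The exact lines added to __init__.py are not reproduced in this description; presumably the export looks roughly like the following (illustrative sketch only, not a verbatim excerpt from the PR):

```python
# src/google/adk/plugins/__init__.py (illustrative excerpt, not verbatim)
from .rate_limit_plugin import RateLimitPlugin

__all__ = [
    # ... existing plugin exports ...
    'RateLimitPlugin',
]
```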

Usage Example

```python
from google.adk import Agent, Runner
from google.adk.plugins import RateLimitPlugin

agent = Agent(name="assistant", model="gemini-2.5-flash", ...)

runner = Runner(
    agents=[agent],
    plugins=[
        RateLimitPlugin(max_requests_per_minute=15)
    ]
)
```

Test Coverage

All 7 tests pass successfully:

  • ✅ Requests within limit are allowed (an illustrative test sketch follows this list)
  • ✅ Global tracking across all models
  • ✅ Sliding window cleanup of expired timestamps
  • ✅ Blocking behavior when limit exceeded
  • ✅ Thread-safe concurrent request handling
  • ✅ Default parameters (15 QPM)
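
As a rough illustration of the first case above, a test might look like the sketch below. This is not the code in tests/unittests/plugins/test_rate_limit_plugin.py, whose fixtures may differ; it assumes an asyncio-enabled pytest setup and that mocked callback arguments are sufficient.

```python
# Illustrative sketch only; the real tests in
# tests/unittests/plugins/test_rate_limit_plugin.py may be structured differently.
import time
from unittest.mock import MagicMock

import pytest

from google.adk.plugins import RateLimitPlugin


@pytest.mark.asyncio
async def test_requests_within_limit_are_allowed():
  plugin = RateLimitPlugin(max_requests_per_minute=15)
  start = time.monotonic()
  for _ in range(15):
    # The plugin only inspects its own timing state, so mocked arguments
    # should suffice for the callback signature.
    result = await plugin.before_model_callback(
        callback_context=MagicMock(), llm_request=MagicMock()
    )
    assert result is None  # None means "let the request proceed"
  # All 15 calls should finish without ever hitting the blocking path.
  assert time.monotonic() - start < 1.0
```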

Test Plan

  • Run unit tests: pytest tests/unittests/plugins/test_rate_limit_plugin.py -v
  • Verify plugin imports correctly: from google.adk.plugins import RateLimitPlugin
  • Code formatted with isort and pyink
  • All 7 tests passing

Related Issues

Implements global rate limiting for LLM requests to prevent quota exhaustion and ensure fair resource usage.

Add a new RateLimitPlugin that enforces global rate limiting across all
LLM models using a sliding window algorithm. The plugin blocks (waits)
when the rate limit is exceeded, ensuring requests are processed within
the configured limit.

Key features:
- Global rate limiting (default 15 QPM) across all models
- Sliding window algorithm for accurate tracking
- Automatic blocking when limit exceeded (no errors thrown)
- Thread-safe with asyncio locks
- Automatic cleanup of expired timestamps

Example usage:
```python
from google.adk.plugins import RateLimitPlugin

runner = Runner(
    agents=[agent],
    plugins=[RateLimitPlugin(max_requests_per_minute=15)]
)
```
@gemini-code-assist

Summary of Changes

Hello @Jacksunwei, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates a RateLimitPlugin into the system, providing a robust mechanism for global rate limiting of LLM requests. By employing a sliding window approach and asynchronous locking, it ensures that API usage remains within specified limits while gracefully handling bursts of requests by pausing execution until capacity is available. This enhancement is crucial for maintaining service stability and adhering to external API quotas.

Highlights

  • New RateLimitPlugin: Introduces a new RateLimitPlugin to enforce global request rate limiting for LLM models, preventing quota exhaustion and ensuring fair resource usage.
  • Sliding Window Algorithm: The plugin utilizes a sliding window algorithm for accurate 60-second tracking of requests, automatically cleaning up expired timestamps (see the wait-time sketch after this list).
  • Blocking Behavior: When the configured rate limit is exceeded, the plugin blocks (waits) for a slot to become available rather than throwing errors, ensuring requests are eventually processed.
  • Thread-Safe Implementation: The plugin is designed to be thread-safe, using asyncio.Lock to handle concurrent requests reliably.
  • Default Configuration: The RateLimitPlugin defaults to a global limit of 15 queries per minute (QPM) across all models, which can be customized during initialization.
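
To make the wait-time arithmetic concrete, here is a small standalone sketch (an illustration only, not the plugin's actual helper): a blocked request only needs to sleep until the oldest in-window timestamp expires.

```python
import time

WINDOW_SECONDS = 60.0


def seconds_until_slot(timestamps: list[float], max_requests: int) -> float:
  """Return 0.0 if a request may proceed now, else how long to wait."""
  now = time.time()
  # Keep only timestamps that are still inside the sliding window.
  live = [t for t in timestamps if now - t < WINDOW_SECONDS]
  if len(live) < max_requests:
    return 0.0
  # The earliest live timestamp is the next one to expire.
  return WINDOW_SECONDS - (now - min(live))
```

For example, with a 15 QPM limit and 15 requests all recorded 10 seconds ago, the helper reports roughly 50 seconds of wait.
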
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@adk-bot added the core label ([Component] This issue is related to the core interface and implementation) on Nov 5, 2025

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a RateLimitPlugin to enforce global rate limiting for LLM requests. The implementation uses a sliding window algorithm and is designed to be thread-safe with asyncio.Lock. While the feature is valuable and the test coverage is good, I've identified a critical race condition in the core rate-limiting logic. The current implementation can allow the rate limit to be exceeded under concurrent loads. My review includes a detailed explanation of the issue and a code suggestion to refactor the logic, which resolves the race condition and simplifies the implementation.

Comment on lines +92 to +163
```python
  async def _wait_for_rate_limit(self, current_time: float) -> None:
    """Wait until a request slot becomes available.

    Args:
      current_time: Current time in seconds since epoch.
    """
    while True:
      async with self._lock:
        timestamps = self._clean_old_timestamps(
            self._request_timestamps, time.time()
        )
        self._request_timestamps = timestamps

        if len(timestamps) < self.max_requests:
          # Slot available, exit loop
          return

        # Calculate wait time until the oldest request falls outside the window
        oldest_timestamp = timestamps[0]
        wait_seconds = 60.0 - (time.time() - oldest_timestamp) + 0.1

      # Wait outside the lock to allow other operations
      if wait_seconds > 0:
        await asyncio.sleep(wait_seconds)
      else:
        # Re-check immediately
        await asyncio.sleep(0.01)

  async def before_model_callback(
      self, *, callback_context: CallbackContext, llm_request: LlmRequest
  ) -> Optional[LlmResponse]:
    """Check and enforce rate limits before each LLM request.

    This callback is invoked before every LLM request. It checks whether
    the request would exceed the configured global rate limit across all models.
    If so, it blocks (waits) until the rate limit allows the request.

    Args:
      callback_context: Context containing agent, user, and session information.
      llm_request: The LLM request that is about to be sent.

    Returns:
      None to allow the request to proceed (after waiting if necessary).
    """
    current_time = time.time()

    async with self._lock:
      # Clean old timestamps
      timestamps = self._clean_old_timestamps(
          self._request_timestamps, current_time
      )
      self._request_timestamps = timestamps

      # Check if rate limit would be exceeded
      if len(timestamps) >= self.max_requests:
        # Need to wait
        pass
      else:
        # Slot available, record and proceed
        self._request_timestamps.append(current_time)
        return None

    # Wait for availability if limit exceeded
    await self._wait_for_rate_limit(current_time)

    # Record this request after waiting
    async with self._lock:
      current_time = time.time()
      self._request_timestamps.append(current_time)

    # Allow request to proceed
    return None
```

critical

The current implementation of the rate limiter has a critical race condition that can cause the rate limit to be exceeded under concurrent load. The before_model_callback method checks the limit, but if it's exceeded, it releases the lock before waiting. After waiting, it re-acquires the lock and appends a timestamp without re-validating the limit. Another request could have taken the available slot in the meantime, leading to more requests being processed than allowed.

To fix this and simplify the logic, the check and the action (appending the timestamp) should be atomic. This can be achieved by refactoring the logic into a single loop within before_model_callback and removing the separate _wait_for_rate_limit method. The proposed suggestion consolidates the logic, making it both correct and easier to understand.

```python
  async def before_model_callback(
      self, *, callback_context: CallbackContext, llm_request: LlmRequest
  ) -> Optional[LlmResponse]:
    """Check and enforce rate limits before each LLM request.

    This callback is invoked before every LLM request. It checks whether
    the request would exceed the configured global rate limit across all models.
    If so, it blocks (waits) until the rate limit allows the request.

    Args:
      callback_context: Context containing agent, user, and session information.
      llm_request: The LLM request that is about to be sent.

    Returns:
      None to allow the request to proceed (after waiting if necessary).
    """
    while True:
      async with self._lock:
        current_time = time.time()
        self._request_timestamps = self._clean_old_timestamps(
            self._request_timestamps, current_time
        )

        if len(self._request_timestamps) < self.max_requests:
          self._request_timestamps.append(current_time)
          return None  # Allow request to proceed

        # Rate limit is active, calculate necessary wait time.
        # Timestamps are sorted, so the oldest is at the front.
        oldest_timestamp = self._request_timestamps[0]
        wait_seconds = 60.0 - (current_time - oldest_timestamp) + 0.1

      # Wait outside the lock to avoid blocking other coroutines.
      if wait_seconds > 0:
        await asyncio.sleep(wait_seconds)
      else:
        # A small sleep to prevent a tight loop if wait_seconds is negative.
        await asyncio.sleep(0.01)
```

@ryanaiagent self-assigned this on Nov 7, 2025