
Enable (optional) concurrent execution of function calls in get_function_response_parts_async #2281

@marinburazin

Description


The problem:

  • Relevant only for the async client with automatic function calling enabled.
  • Currently, if an LLM returns two or more function calls for execution, they are executed sequentially.
  • Since these are often blocking external API calls, waiting for each one to complete in turn can quickly add up, increasing overall latency.
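The latency cost described above can be illustrated with a small timing sketch. The two tool functions below are hypothetical stand-ins (not part of the SDK) that simulate blocking external API calls with `asyncio.sleep`:

```python
import asyncio
import time

# Hypothetical stand-ins for blocking tool calls (e.g. weather/traffic APIs),
# each taking roughly 0.1 s.
async def get_current_weather() -> str:
    await asyncio.sleep(0.1)
    return "sunny"

async def get_current_traffic() -> str:
    await asyncio.sleep(0.1)
    return "heavy"

async def sequential() -> float:
    # Await each call one after the other: latencies add up.
    start = time.perf_counter()
    await get_current_weather()
    await get_current_traffic()
    return time.perf_counter() - start

async def concurrent() -> float:
    # Run both calls at once: latencies overlap.
    start = time.perf_counter()
    await asyncio.gather(get_current_weather(), get_current_traffic())
    return time.perf_counter() - start

seq = asyncio.run(sequential())   # roughly 0.2 s
con = asyncio.run(concurrent())   # roughly 0.1 s
print(f"sequential={seq:.2f}s concurrent={con:.2f}s")
```

With N independent tool calls, the sequential total grows with the sum of their latencies, while the concurrent total is bounded by the slowest single call.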

Proposed solution

  • Expand AutomaticFunctionCallingConfig and AutomaticFunctionCallingConfigDict with an optional parameter (e.g. run_concurrently), which defaults to None if not provided.
  • Expand get_function_response_parts_async so it uses asyncio.gather() when run_concurrently is set to True.

Client-side example

  • On the client side, it would look something like this:
    response = await client.aio.models.generate_content(
        model='gemini-2.5-flash',
        contents='What is weather and traffic situation in Boston?',
        config=types.GenerateContentConfig(
            tools=[get_current_weather, get_current_traffic],
            automatic_function_calling=types.AutomaticFunctionCallingConfig(run_concurrently=True)
        ),
    )

Metadata


Assignees

No one assigned

    Labels

    • priority: p3 (Desirable enhancement or fix. May not be included in next release.)
    • type: feature request ('Nice-to-have' improvement, new feature or different behavior or design.)
