
[Frontend] Adding the "User Defined Custom Tool Calling" parser for the Llama models #12752


Status: Open. Wants to merge 1,243 commits into main.

Conversation

lulmer commented Feb 4, 2025

Description

The current Llama tool parsing in vLLM is based on JSON tool calling, following the procedure given by Meta. However, another tool-parsing strategy is described on the same page: User Defined Custom Tool Calling.
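For context, here is a minimal illustration of the difference between the two output formats (the get_weather function and its argument are hypothetical, and the exact tags follow my reading of Meta's prompt-format docs):

    # JSON-based tool calling: the model emits a plain JSON object.
    json_based = '{"name": "get_weather", "parameters": {"city": "Paris"}}'

    # User-defined custom tool calling: the model wraps the JSON arguments
    # in <function=...> tags.
    user_defined = '<function=get_weather>{"city": "Paris"}</function>'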

The gain is substantial: after testing this approach as a plugin on a private function-calling benchmark (more than 120 different scenarios with a set of 30 complex and lengthy fintech tool definitions), I observed significantly higher function-calling accuracy than with the current JSON-based tool parser. I also ran some experiments on the BFCL benchmark (AST non-live bench) and observed the same kind of improvement:

| Model | AST Summary | Simple AST | Python Simple AST | Java Simple AST | JavaScript Simple AST | Multiple AST | Parallel AST | Parallel Multiple AST |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Meta-Llama-3.1-8B-Instruct-FC (JSON tool call) | 43.33% | 56.33% | 39.00% | 60.00% | 70.00% | 29.50% | 30.00% | 18.50% |
| Meta-Llama-3.1-8B-Instruct (user-defined custom tool call) | 72.98% | 67.58% | 89.75% | 57.00% | 56.00% | 86.50% | 78.50% | 75.50% |

This PR introduces a new Llama3UserDefinedCustomToolParser class that extends the ToolParser base class. The new parser adds streaming support when using custom tools with Llama models: it extracts tool calls and their arguments from the model's response as it streams, enabling real-time processing of tool calls.

The flow looks like this:

*(flow diagram image)*
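To give an idea of the extraction step, here is a minimal non-streaming sketch; it is not the actual Llama3UserDefinedCustomToolParser implementation (which subclasses ToolParser and works incrementally on streamed deltas), just the core idea:

    import json
    import re

    # Matches <function=name>{...json arguments...}</function> spans.
    TOOL_CALL_RE = re.compile(
        r"<function=(?P<name>[^>]+)>(?P<args>.*?)</function>", re.DOTALL)

    def extract_tool_calls(model_output: str) -> list[dict]:
        """Parse every custom tool call found in the model output."""
        calls = []
        for match in TOOL_CALL_RE.finditer(model_output):
            try:
                arguments = json.loads(match.group("args"))
            except json.JSONDecodeError:
                continue  # skip malformed argument payloads
            calls.append({"name": match.group("name"), "arguments": arguments})
        return calls

    print(extract_tool_calls('<function=get_weather>{"city": "Paris"}</function>'))
    # -> [{'name': 'get_weather', 'arguments': {'city': 'Paris'}}]

The streaming path in the PR has to do the same thing incrementally, emitting the function name as soon as the <function=...> tag is complete and the argument deltas as they arrive.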

Main Changes

  1. New Parser Class: The Llama3UserDefinedCustomToolParser class is added to handle streaming tool calls for Llama models.
  2. New Chat Template: An example chat template that works well with this parser can be found at vllm/examples/tool_chat_template_llama3.1_usr_def_tool_call.jinja.

Remarks

This is my first PR on the vLLM project, and I believe there are still a few things I need guidance on:

  • What is the testing process? I don't really know how to proceed.
  • This parser requires a chat template that differs from the one written for JSON-based tool calling. Although I provided it in the examples directory, I don't know how to indicate to users that they should use it.
  • There might be some edge cases I am not covering when handling streaming tool parsing.
  • I noticed there is an ongoing frontend refactoring ([RFC]: Refactor tool parsers to eliminate coding errors and allow more efficient implementations. #11522) and a discussion of the potential benefits of using FSMs for tool parsers. I don't know how it could impact the work here, but I am happy to discuss potential changes.


github-actions bot commented Feb 4, 2025

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which starts with a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@mergify mergify bot added the frontend label Feb 4, 2025
@mergify mergify bot added the documentation Improvements or additions to documentation label Feb 28, 2025
paolovic (Contributor) commented Apr 6, 2025

Hi @lulmer ,
I fixed the causes of the failing tests here: https://github.com/paolovic/vllm/tree/feature/llama_tool_calling. Do you want to incorporate them into your PR, or should I create a separate one?


mergify bot commented Apr 6, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @lulmer.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

lulmer (Author) commented Apr 6, 2025

> Hi @lulmer, I fixed the causes of the failing tests here: https://github.com/paolovic/vllm/tree/feature/llama_tool_calling. Do you want to incorporate them into your PR, or should I create a separate one?

Thank you for the pointers, @paolovic. I hadn't noticed that I needed to address linting issues; I updated my branch accordingly, as the fixes were simple, and rebased on the current main branch as well.

paolovic (Contributor) commented Apr 6, 2025

Hi @lulmer,

no problem! Please don't forget to sign off your commits with -s, e.g.
git commit -m "this is an example commit" -s; otherwise the DCO check will fail, as it just did.
I think you can fix it with the instructions given here under "Rebase the branch".

njhill and others added 21 commits April 7, 2025 07:38
@mergify mergify bot added ci/build multi-modality Related to multi-modality (#4194) structured-output speculative-decoding v1 tpu Related to Google TPUs labels Apr 7, 2025

mergify bot commented Apr 7, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @lulmer.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Apr 7, 2025
@mergify mergify bot removed tpu Related to Google TPUs needs-rebase labels Apr 7, 2025
lulmer (Author) commented Apr 7, 2025

> Hi @lulmer,
>
> no problem! Please don't forget to sign off your commits with -s, e.g. git commit -m "this is an example commit" -s; otherwise the DCO check will fail, as it just did. I think you can fix it with the instructions given here under "Rebase the branch".

@paolovic Thank you, I followed the procedure you pointed to and the DCO check now passes, but a lot of unrelated files/labels have been added to this PR. I had to resolve a merge conflict (I systematically chose the latest updates from the main branch) and re-apply the pre-commit fixes for examples/offline_inference/mistral-small.py.
I've also noticed that v1/core/kv_cache_manager.py is now part of the modified files; I hope that is expected and won't be a problem for the review. Otherwise, let me know what I should do next.

paolovic (Contributor) commented Apr 7, 2025

Hi @lulmer,
is it possible that you are unintentionally trying to push an outdated version of v1/core/kv_cache_manager.py to vllm:main?
If so, please make sure to sync your fork like so:

  1. Go to your fork
  2. Sync fork (on the top right corner)
  3. Update Branch (Green Button)

lulmer (Author) commented Apr 7, 2025

> Hi @lulmer, is it possible that you are unintentionally trying to push an outdated version of v1/core/kv_cache_manager.py to vllm:main? If so, please make sure to sync your fork like so:
>
>   1. Go to your fork
>   2. Sync fork (on the top right corner)
>   3. Update Branch (Green Button)

I made sure to sync my fork when I did the rebase (if you look at the history, Mergify told me to), in this commit.

When I go to my fork in the GitHub UI, the Sync fork tab on the right says my fork is up to date with the latest main branch; see below.

*(screenshot)*

lulmer (Author) commented Apr 10, 2025

@paolovic are there any additional steps I should take now?

paolovic (Contributor) commented

@lulmer tbh, I would close this PR and create a new, clean one where v1/core/kv_cache_manager.py is not listed as a changed file.

Furthermore, fix the failing checks; without them it won't be merged.

FYI: I cannot and won't do the code review.

lulmer (Author) commented Apr 23, 2025

@paolovic I finally managed to remove the changes to v1/core/kv_cache_manager.py. I think it is fine now.

paolovic (Contributor) commented Apr 23, 2025

@lulmer nice, super cool!

By the way, could you provide an example of how to use it?

edit: I guess something like this:

vllm serve --model meta/llama... \
            --chat-template examples/usr_defined... \
            --enable-auto-tool-choice --tool-call-parser llama

lulmer (Author) commented Apr 23, 2025

> @lulmer nice, super cool!
>
> By the way, could you provide an example of how to use it?
>
> edit: I guess something like this:
>
>     vllm serve --model meta/llama... \
>                 --chat-template examples/usr_defined... \
>                 --enable-auto-tool-choice --tool-call-parser llama

That is one way to use it!

I have always run it as an external plugin, but I would assume it can be launched this way:

vllm serve meta-llama/Llama-3.1-8B-Instruct \
    --enable-auto-tool-choice \
    --tool-call-parser llama3_user_defined_custom \
    --chat-template examples/tool_chat_template_llama3.1_usr_def_tool_call.jinja
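For completeness, here is a client-side sketch that should work against that server (assuming it listens on localhost:8000; the get_weather tool definition is a hypothetical example):

    # Requires the openai Python package; the server exposes an
    # OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    print(response.choices[0].message.tool_calls)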


mergify bot commented May 12, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @lulmer.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label May 12, 2025
@mergify mergify bot added the llama Related to Llama models label Jun 11, 2025
Labels: ci/build, documentation, frontend, llama, multi-modality, needs-rebase, speculative-decoding, structured-output, tool-calling, v1