
[Frontend] Adding the "User Defined Custom Tool Calling" parser for the Llama models #12752

Open · wants to merge 2 commits into base: main
Conversation


@lulmer lulmer commented Feb 4, 2025

Description

The current Llama tool parsing in vLLM relies on JSON-based tool calling, following the procedure given by Meta. However, the same page describes another tool-calling strategy: User-Defined Custom Tool Calling.
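To make the distinction concrete, the two strategies produce differently delimited model outputs. The sketch below illustrates parsing both shapes; the delimiters follow Meta's documented formats, but the helper functions and the `get_price` tool are hypothetical, not part of this PR:

```python
import json
import re


def parse_json_tool_call(text: str) -> tuple[str, dict]:
    """Parse a JSON-based tool call: {"name": ..., "parameters": {...}}."""
    obj = json.loads(text)
    return obj["name"], obj["parameters"]


def parse_custom_tool_call(text: str) -> tuple[str, dict]:
    """Parse a user-defined custom tool call: <function=name>{...}</function>."""
    m = re.search(r"<function=([^>]+)>(.*?)</function>", text, re.DOTALL)
    if m is None:
        raise ValueError("no custom tool call found in model output")
    return m.group(1), json.loads(m.group(2))


# JSON-based format (current parser)
name, args = parse_json_tool_call(
    '{"name": "get_price", "parameters": {"ticker": "AAPL"}}'
)
# User-defined custom format (this PR)
name2, args2 = parse_custom_tool_call(
    '<function=get_price>{"ticker": "AAPL"}</function>'
)
```

Both calls above recover the same `("get_price", {"ticker": "AAPL"})` pair; only the wire format differs.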

The gain is substantial: after testing this approach as a plugin on a private function-calling benchmark (more than 120 scenarios with a set of 30 complex and lengthy fintech tool definitions), I observed significantly higher function-calling accuracy than with the current JSON-based tool parser. I also ran experiments on the BFCL benchmark (AST non-live bench) and observed the same kind of improvement:

| Model | AST Summary | Simple AST | Python Simple AST | Java Simple AST | JavaScript Simple AST | Multiple AST | Parallel AST | Parallel Multiple AST |
|---|---|---|---|---|---|---|---|---|
| Meta-Llama-3.1-8B-Instruct-FC (JSON tool call) | 43.33% | 56.33% | 39.00% | 60.00% | 70.00% | 29.50% | 30.00% | 18.50% |
| Meta-Llama-3.1-8B-Instruct (user-defined custom tool call) | 72.98% | 67.58% | 89.75% | 57.00% | 56.00% | 86.50% | 78.50% | 75.50% |

This PR introduces a new Llama3UserDefinedCustomToolParser class that extends the ToolParser base class. The new parser adds streaming support when using custom tools with Llama models: it extracts tool calls and their arguments from the model's response incrementally, enabling real-time processing of tool calls.

The flow looks like this:
[flow diagram attached to the PR]
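The streaming path can be thought of as buffering text deltas until a complete `<function=...>...</function>` span has arrived, then emitting the tool call. The following is a deliberately simplified sketch of that idea, not the PR's actual implementation (class and method names are hypothetical):

```python
import json
import re


class StreamingCustomToolBuffer:
    """Accumulate streamed deltas; emit a tool call once </function> is seen."""

    CALL_RE = re.compile(r"<function=([^>]+)>(.*?)</function>", re.DOTALL)

    def __init__(self) -> None:
        self.buffer = ""

    def feed(self, delta: str):
        """Add one streamed delta; return (name, args) when a call completes,
        or None while the closing tag has not arrived yet."""
        self.buffer += delta
        m = self.CALL_RE.search(self.buffer)
        if m is None:
            return None  # still waiting for </function>
        self.buffer = self.buffer[m.end():]  # keep any trailing text
        return m.group(1), json.loads(m.group(2))


# Deltas may split the tag and the JSON arguments at arbitrary points.
buf = StreamingCustomToolBuffer()
chunks = ['<function=get_', 'price>{"tick', 'er": "AAPL"}</func', 'tion>']
result = None
for chunk in chunks:
    result = buf.feed(chunk) or result
```

The real parser additionally has to handle plain-text content interleaved with calls and report argument diffs as they stream, which is where the edge cases mentioned below come from.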

Main Changes

  1. New Parser Class: The Llama3UserDefinedCustomToolParser class is added to handle streaming tool calls for Llama models.
  2. New Chat Template: An example chat template that works well with this parser is provided at vllm/examples/tool_chat_template_llama3.1_usr_def_tool_call.jinja.
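For reference, wiring the template and parser together at serve time would look roughly like this; the parser name `llama3_user_defined` is an assumption here (check the name actually registered by this PR), while the flags themselves are existing vLLM options:

```shell
# Hypothetical launch sketch; the --tool-call-parser value is an assumption
vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct \
    --enable-auto-tool-choice \
    --tool-call-parser llama3_user_defined \
    --chat-template examples/tool_chat_template_llama3.1_usr_def_tool_call.jinja
```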

Remarks

This is my first PR on the vLLM project, and there are a few points I would like guidance on:

  • What is the testing process? I am not sure how to proceed.
  • This parser requires a chat template different from the one written for JSON-based tool calling. I provided one in the examples directory, but I don't know how best to point users to it.
  • There may be edge cases I am not covering in the streaming tool parsing.
  • I noticed an ongoing frontend refactoring effort ([RFC]: Refactor tool parsers to eliminate coding errors and allow more efficient implementations. #11522) and a discussion of the potential benefits of using FSMs for tool parsers. I don't know how that might impact this work, but I am happy to discuss potential changes.

github-actions bot commented Feb 4, 2025

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which starts a small, essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@mergify mergify bot added the frontend label Feb 4, 2025