Conversation

laocheyujie (Contributor)

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

This PR adds support for GLM-4.5 agent (function call) training in ms-swift.

Key updates:

  1. Added GLM4_5AgentTemplate in plugin/agent_template/glm4.py to support training GLM-4.5 function call agents.
  2. Enhanced suffix appending logic: in the final generation round, check if the output already ends with a stop word, and skip appending the suffix if so to avoid redundancy.
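
The suffix-skipping rule in update 2 can be sketched as a small standalone helper. The function name and signature below are illustrative, not the actual ms-swift code; only the string-stop-word filtering mirrors the line added in this PR's diff:

```python
# Illustrative sketch of the suffix-skipping rule from update 2.
# `append_suffix` is a made-up helper, not an ms-swift function; the
# isinstance filter mirrors the `string_stop_words` line in the PR diff.

def append_suffix(output, suffix, stop_words):
    """Append `suffix` in the final generation round unless `output`
    already ends with one of the string stop words."""
    # Stop words may also be token-id lists; only plain strings can be
    # checked with str.endswith, which accepts a tuple of suffixes.
    string_stop_words = tuple(s for s in stop_words if isinstance(s, str))
    if string_stop_words and output.endswith(string_stop_words):
        return output  # already terminated; appending would be redundant
    return output + suffix
```

For example, with stop_words=['<|user|>'], an output that already ends in <|user|> is returned unchanged, while any other output gets the suffix appended once.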

Experiment results

  • Training successfully runs on 8×A100 (80GB) with GLM-4.5 agent configuration.

@gemini-code-assist bot (Contributor) left a comment:

Summary of Changes

Hello @laocheyujie, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've implemented support for GLM-4.5 agent training, specifically enabling function call capabilities within the ms-swift project. This involved integrating a new agent template and refining the text generation process to ensure more accurate and efficient output by preventing redundant suffixes. The changes aim to expand the framework's compatibility with advanced large language models and their interactive features.

Highlights

  • GLM-4.5 Agent Support: I've added comprehensive support for GLM-4.5 agent (function call) training within the ms-swift framework. This includes defining a new template and agent-specific logic for GLM-4.5 agents.
  • Suffix Appending Logic Enhancement: I've refined the suffix appending logic during the final generation round. The system now checks if the output already ends with a stop word, preventing redundant suffix additions and improving output quality.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP 8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist bot (Contributor) left a comment:

Code Review

This pull request adds support for the GLM-4.5 agent, including a new agent template and updates to the suffix appending logic. The changes are well-structured and address the goal of the PR. I've provided a few suggestions to improve code clarity, style, and performance.

@@ -1135,7 +1135,8 @@ def _swift_encode(self, inputs: StdTemplateInputs):
context_list.append('{{RESPONSE}}')
# self.is_training needed because we may want to continue generation from
# the current response
if self.is_training and not sep_token or self.task_type == 'embedding':
string_stop_words = tuple(s for s in template_meta.stop_words if isinstance(s, str))
A Collaborator commented:

What is the reason for this modification here?

@laocheyujie (Contributor, Author) replied on Aug 21, 2025:

Originally, I made that change to prevent models like GLM-4.5 from producing <|observation|><|user|> when the final round was a tool_call. But I just realized this can be handled more cleanly by overriding _swift_encode in the GLM4_5Template class, so I removed that part of the modification in my latest commit.
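
The override approach described above might look roughly like the toy sketch below. The base class is a minimal stand-in for the real template machinery, and the skip condition is inferred from this thread rather than copied from the actual commit:

```python
# Toy illustration of the subclass-override pattern described above.
# `Template` is a minimal stand-in for the real base class; the
# <|observation|> check is an assumption based on this discussion.

class Template:
    suffix = '<|user|>'

    def _swift_encode(self, response):
        # Base behavior: always terminate the final round with the suffix.
        return response + self.suffix


class GLM4_5Template(Template):
    def _swift_encode(self, response):
        # A final round that was a tool_call already ends with
        # <|observation|>, so appending <|user|> would be redundant.
        if response.endswith('<|observation|>'):
            return response
        return super()._swift_encode(response)
```

Keeping the special case in the GLM4_5Template subclass leaves the shared encoding path untouched for every other template.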

A Collaborator replied:

Hello!
Here, <|user|> (suffix) is only added during training. Could you please clarify: in what training scenarios would the final round include <|observation|> but not tool_response?

@Jintao-Huang (Collaborator) commented on Aug 24, 2025:

Hello, please run the following code to pass the lint test.

pip install pre-commit
pre-commit run --all-files

@Jintao-Huang merged commit f9a925a into modelscope:main on Aug 25, 2025 (1 of 2 checks passed).
@Jintao-Huang (Collaborator) commented:

/gemini review

@gemini-code-assist bot (Contributor) left a comment:

Code Review

This pull request introduces support for GLM-4.5 agent function call training. The changes include a new GLM4_5AgentTemplate for handling the specific tool call format of GLM-4.5, along with corresponding template metadata. The logic for suffix appending is also enhanced to prevent redundant tokens. The implementation is well-structured and follows existing patterns in the codebase. I've identified a few areas for improvement regarding code robustness and readability in the new agent template implementation.

Comment on lines +144 to +145
for arg_key, arg_value in tool_call['arguments'].items():
tool_calls.append(f'<arg_key>{arg_key}</arg_key>\n<arg_value>{arg_value}</arg_value>')

Severity: high

The code assumes that tool_call['arguments'] is always a dictionary, which may not be guaranteed by _parse_tool_call. If arguments is not a dictionary, tool_call['arguments'].items() will raise an AttributeError at runtime. It's safer to check that arguments is a dictionary before iterating over it, making the function more robust against malformed inputs.

Suggested change
for arg_key, arg_value in tool_call['arguments'].items():
tool_calls.append(f'<arg_key>{arg_key}</arg_key>\n<arg_value>{arg_value}</arg_value>')
arguments = tool_call.get('arguments')
if isinstance(arguments, dict):
for arg_key, arg_value in arguments.items():
tool_calls.append(f'<arg_key>{arg_key}</arg_key>\n<arg_value>{arg_value}</arg_value>')
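
The suggested guard can be exercised with a small self-contained sketch. The format_arguments helper below is illustrative, not part of ms-swift; only the <arg_key>/<arg_value> format is taken from the diff:

```python
# Standalone sketch of the guarded iteration suggested above.
# `format_arguments` is a made-up helper name, not an ms-swift function.

def format_arguments(tool_call):
    tool_calls = []
    arguments = tool_call.get('arguments')
    if isinstance(arguments, dict):
        for arg_key, arg_value in arguments.items():
            tool_calls.append(f'<arg_key>{arg_key}</arg_key>\n<arg_value>{arg_value}</arg_value>')
    return tool_calls
```

A dict yields one <arg_key>/<arg_value> entry per argument, while a non-dict value (e.g. an unparsed JSON string) yields an empty list instead of raising AttributeError.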

Comment on lines +100 to +104
functions = []
for toolcall in toolcall_list:
function = self._find_function_call(toolcall)
if function:
functions.append(function)

Severity: medium

The loop for collecting functions can be made more concise and Pythonic by using a list comprehension with a walrus operator (PEP 572). This can improve readability and reduce boilerplate code.

Suggested change
functions = []
for toolcall in toolcall_list:
function = self._find_function_call(toolcall)
if function:
functions.append(function)
functions = [func for toolcall in toolcall_list if (func := self._find_function_call(toolcall))]

if with_action:
return super()._format_tool_responses(assistant_content, tool_messages)
res = []
for _, tool_message in enumerate(tool_messages):

Severity: medium

The use of enumerate here is unnecessary as the index _ is not used. You can directly iterate over tool_messages for cleaner and more idiomatic code.

Suggested change
for _, tool_message in enumerate(tool_messages):
for tool_message in tool_messages:
