Proposal: An alternative to chat templates #6726

Closed

@kaizau

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Feature Description

Please provide a detailed written description of what you were trying to do, and what you expected llama.cpp to do as an enhancement.

Jinja template support has already been discussed extensively, and I'd place the main tension between:

  1. Keeping llama.cpp simple and maintainable
  2. Flexibly supporting a variety of current and future templates

I'm opening this issue to propose an alternative that potentially satisfies both. As a placeholder, let's call it role templates instead of chat templates:

#include <string>
#include <unordered_map>

std::unordered_map<std::string, std::string> chatML = {
  {"system",    "<|im_start|>system\n{{content}}<|im_end|>\n"},
  {"user",      "<|im_start|>user\n{{content}}<|im_end|>\n"},
  {"assistant", "<|im_start|>assistant\n{{content}}<|im_end|>\n"},
};

std::unordered_map<std::string, std::string> gemma = {
  // Models with no system role could just prepend the system message to the first user message
  {"user",        "<start_of_turn>user\n{{content}}<end_of_turn>\n"},
  {"assistant",   "<start_of_turn>model\n{{content}}<end_of_turn>\n"},
};

std::unordered_map<std::string, std::string> mistral = {
  // Could have special "roles" for common exceptions to the pattern
  {"__begin",     "<s>"},
  {"user",        "[INST] {{content}} [/INST]"},
  {"assistant",   "{{content}}</s>"},
};

std::unordered_map<std::string, std::string> researchExperiment = {
  // Flexible enough to support whatever crazy template comes out next week
  {"user",        "<|user|>{{content}}<|end_user|>\n"},
  {"system_1",    "<|fast|>{{content}}<|end_fast|>\n"},
  {"system_2",    "<|slow|>{{content}}<|end_slow|>\n"},
  {"agent",       "<|agent|>{{content}}<|end_agent|>\n"},
  {"retriever",   "<|rag|>{{content}}<|end_rag|>\n"},
};

Rendering is then just a loop through the messages: look up each one's role in the map and find-replace {{content}}. And add_generation_prompt is simply the substring in front of the next message's {{content}}.
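
A minimal sketch of that loop, kept self-contained for illustration (chat_message, replace_content, and apply_role_template are hypothetical names, not existing llama.cpp API):

#include <string>
#include <unordered_map>
#include <vector>

// Illustrative message struct; llama.cpp has its own chat message type,
// but this sketch stays self-contained.
struct chat_message {
    std::string role;
    std::string content;
};

// Replace the first occurrence of {{content}} in a role's template.
static std::string replace_content(std::string tmpl, const std::string & content) {
    const std::string placeholder = "{{content}}";
    const size_t pos = tmpl.find(placeholder);
    if (pos != std::string::npos) {
        tmpl.replace(pos, placeholder.size(), content);
    }
    return tmpl;
}

// Render a conversation with a role template.
std::string apply_role_template(
        const std::unordered_map<std::string, std::string> & tmpl,
        const std::vector<chat_message> & messages,
        bool add_generation_prompt) {
    std::string result;
    const auto begin = tmpl.find("__begin");
    if (begin != tmpl.end()) {
        result += begin->second; // e.g. "<s>" for Mistral
    }
    for (const auto & msg : messages) {
        const auto it = tmpl.find(msg.role);
        if (it == tmpl.end()) {
            continue; // unknown role: skip, fall back, or error
        }
        result += replace_content(it->second, msg.content);
    }
    if (add_generation_prompt) {
        // The generation prompt is everything before {{content}} in the
        // assistant entry, e.g. "<|im_start|>assistant\n" for ChatML.
        const auto it = tmpl.find("assistant");
        if (it != tmpl.end()) {
            result += it->second.substr(0, it->second.find("{{content}}"));
        }
    }
    return result;
}

With the chatML map above and add_generation_prompt = true, a system + user conversation renders to the two formatted turns followed by "<|im_start|>assistant\n", leaving the model to generate the reply.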

This format itself could be anything — JSON, YAML, key-value pairs — making it easy to adopt in non-llama.cpp contexts as well.
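
For instance, the maps above could be stored as flat key-value pairs in GGUF metadata. The key names here are purely illustrative, not an existing GGUF convention:

tokenizer.chat_template.role.system    = "<|im_start|>system\n{{content}}<|im_end|>\n"
tokenizer.chat_template.role.user      = "<|im_start|>user\n{{content}}<|im_end|>\n"
tokenizer.chat_template.role.assistant = "<|im_start|>assistant\n{{content}}<|im_end|>\n"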

Motivation

Please provide a detailed written description of reasons why this feature is necessary and how it is useful to llama.cpp users.

For llama.cpp maintainers / model authors:

  • It flattens the complexity of Jinja into a simple find-replace operation. But it's still flexible enough to handle most (all?) templates.
  • Similar to Jinja, it gives model authors control and responsibility over formatting, instead of needing others to translate their work into this and other projects.
  • Even if model authors are slow to adopt the format, it could be added to GGUF conversion as a suggested part of the process.

For end users:

  • New models should "just work" far more often.
  • This could be exposed as a config option to allow providing custom role templates.

For client apps / front ends:

It's a viable alternative to the current state, where every chat client that uses llama.cpp's completion API maintains its own library of chat templates. Because llama.cpp doesn't support all templates, every downstream chat client still has to reinvent the wheel.

For open models, in general:

My own experience adding chat templates opened my eyes to just how messy the template landscape is right now. Open models don't just lag in scale; they also have to deal with compatibility and usability issues that closed models can sidestep.

Chat templates feel like an important thing to get right, and I think llama.cpp can greatly simplify this for the many projects that depend on it.

Possible Implementation

If you have an idea as to how it can be implemented, please write a detailed description. Feel free to give links to external sources or share visuals that might be helpful to understand the details better.

  • I'd lean towards starting with a Python script that loads metadata from a diverse set of models, renders their Jinja templates, and generates a set of tests to validate whether this approach can handle all cases. Basically, an addition / expansion to tests/test-chat-template.
  • llama_chat_apply_template_internal could be refactored to use role templates under the hood, so that the existing --chat-template flag keeps working (see the sketch after this list).
  • Potentially has implications for Implement (properly) different chat templates in main.cpp #6391
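
A rough sketch of that refactor, reusing the hypothetical apply_role_template() and chat_message from the sketch above. The real llama_chat_apply_template_internal has a different signature; this is simplified for illustration:

#include <cstdint>

// Illustrative lookup from --chat-template names to role template maps;
// chatML, gemma, and mistral are the maps defined earlier in this proposal.
static const std::unordered_map<std::string,
        std::unordered_map<std::string, std::string>> ROLE_TEMPLATES = {
    {"chatml",  chatML},
    {"gemma",   gemma},
    {"mistral", mistral},
};

// Resolve the template name, then render with the generic loop instead of
// maintaining per-template C++ branches.
static int32_t llama_chat_apply_template_internal(
        const std::string & tmpl_name,
        const std::vector<chat_message> & messages,
        std::string & dest,
        bool add_generation_prompt) {
    const auto it = ROLE_TEMPLATES.find(tmpl_name);
    if (it == ROLE_TEMPLATES.end()) {
        return -1; // unknown template name
    }
    dest = apply_role_template(it->second, messages, add_generation_prompt);
    return (int32_t) dest.size();
}

Unknown names could also fall back to a user-supplied role template file, which is what would make the custom-template config option mentioned above possible.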

Happy to submit a PR or collaborate if this is a direction folks are interested in.
