Add metadata override and also generate dynamic default filename when converting gguf #7165

Closed
@mofosyne

Description

This is a formalized ticket for PR #4858 so people are aware of it and can help figure out whether this idea makes sense, and if so, what needs to be done from a feature-requirements perspective before it can be merged.

Feature Description and Motivation

Metadata Override

Safetensors files provided by external parties are often missing certain metadata or have incorrectly formatted metadata. To make models easier to find on Hugging Face, accurate metadata is a must.

The idea is to allow users to override metadata in the generated gguf by providing a JSON metadata file:

./llama.cpp/convert.py maykeye_tinyllama --outtype f16 --metadata maykeye_tinyllama-metadata.json

where the metadata override file may look like:

{
    "general.name": "TinyLLama",
    "general.version": "v0",
    "general.author": "mofosyne",
    "general.url": "https://huggingface.co/mofosyne/TinyLLama-v0-llamafile",
    "general.description": "This gguf is ported from a first version of Maykeye attempt at recreating roneneldan/TinyStories-1M but using Llama architecture",
    "general.license": "apache-2.0",
    "general.source.url": "https://huggingface.co/Maykeye/TinyLLama-v0",
    "general.source.huggingface.repository": "https://huggingface.co/Maykeye/TinyLLama-v0"
}

At the moment the PR only recognizes metadata that is explicitly defined in the Python gguf writer, so any keys that are not yet defined will be ignored. If you think that should not be the case, then definitely make your case and I'll see how easy it is to allow arbitrary user-defined metadata in the override JSON.
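Here is a rough sketch (not the PR's actual code) of how the override file might be consumed, assuming a hardcoded whitelist that mirrors the keys the Python gguf writer currently exposes:

# Rough sketch: load a metadata override JSON and keep only the general.*
# keys the converter already knows how to write; unknown keys are dropped,
# matching the current behaviour described above.
import json
from pathlib import Path

KNOWN_KEYS = {  # illustrative whitelist, not the real list in gguf-py
    "general.name",
    "general.version",
    "general.author",
    "general.url",
    "general.description",
    "general.license",
    "general.source.url",
    "general.source.huggingface.repository",
}

def load_metadata_override(path: Path) -> dict:
    with open(path, encoding="utf-8") as f:
        overrides = json.load(f)
    return {k: v for k, v in overrides.items() if k in KNOWN_KEYS}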

Default outfile name generation

To help promote a consistent naming scheme, I've added a --get-outfile option and adjusted the default file-naming function to follow the format <Model>-<Version>-<ExpertsCount>x<Parameters>-<Quantization>.gguf (detailed description in the PR).

So, for example, when you call this command

./llama.cpp/convert.py ${MODEL_DIR} --metadata ${METADATA_FILE} --outtype f16 --get-outfile

you would get

TinyLLama-v0-5M-F16

Also, when generating a gguf, if you don't name the output file it will default to something like TinyLLama-v0-5M-F16.gguf.
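For illustration, here is a minimal sketch of how a default name following that format could be assembled (function and parameter names are hypothetical, not taken from the PR):

from typing import Optional

# Build "<Model>-<Version>-<ExpertsCount>x<Parameters>-<Quantization>.gguf",
# omitting the expert count for non-MoE models.
def default_outfile_name(name: str, version: str, params: str,
                         quant: str, experts: Optional[int] = None) -> str:
    size = f"{experts}x{params}" if experts else params
    return f"{name}-{version}-{size}-{quant}.gguf"

# e.g. default_outfile_name("TinyLLama", "v0", "5M", "F16")
#      -> "TinyLLama-v0-5M-F16.gguf"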

This format is based on what I've generally observed in how people name their files on Hugging Face (it's based on vibes, so if you think the naming scheme needs adjusting, let me know).

Possible Implementation

I have already tested the overall flow via this bash script https://huggingface.co/mofosyne/TinyLLama-v0-5M-F16-llamafile/blob/main/llamafile-creation.sh using convert.py, and we already have PR #4858 waiting to be merged.

I've already merged in all the changes required to support this, so the only script that needs to be checked is convert.py (other scripts may need to be adjusted later to port over the file conventions and metadata, but focusing on convert.py is the lowest-hanging fruit and a good MVP to see whether this makes sense in the real world). This should make it easier to review and then merge.

Labels

enhancement (New feature or request), help wanted (Needs help from the community), need feedback (Testing and feedback with results are needed)
