Various tinkering scripts and tools for experimenting with vision-language models (VLMs), with a focus on providing more control and consistency in llava-based models, primarily through control vectors and per-layer modification of the directions and scales of a model's weights.
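
A minimal sketch of the two underlying ideas, assuming PyTorch tensors; the function names and signatures below are illustrative only and are not this repo's API:

```python
import torch

def apply_control_vector(hidden_states: torch.Tensor,
                         direction: torch.Tensor,
                         scale: float) -> torch.Tensor:
    """Shift a layer's hidden states along a control direction by a per-layer scale."""
    d = direction / direction.norm()
    return hidden_states + scale * d

def orthogonalize_weight(weight: torch.Tensor,
                         direction: torch.Tensor) -> torch.Tensor:
    """Abliteration-style weight edit: remove each row's component along `direction`."""
    d = direction / direction.norm()
    # W - (W d) d^T projects the unwanted direction out of the weight matrix
    return weight - torch.outer(weight @ d, d)
```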
While the code here is MIT-licensed, it borrows heavily from other control-vector / abliteration projects.
The MIT license does not apply to the xtuner, llava, llama, and hunyuanvideo models, which have their own licenses and terms here: