Releases: ljleb/sd-mecha
0.1.0rc1
- new merge method API (`@merge_method`, `Parameter(...)`, `Return(...)`)
- new model config API (get rid of blocks embedded in configs in favor of additional blocks configs + conversion)
- replace block-wise merging with key-wise merging
- it is now possible to implement all lycoris types (but be aware that only kohya lora is currently implemented)
- new merge method `sd_mecha.fallback(a, b)` (delegates keys that are missing in `a` to `b`)
- new merge method `sd_mecha.cast(a, *, device, dtype)` that replaces the old `device` and `dtype` kwargs that were automatically added
- unification of args and kwargs of merge methods: there is now no special meaning attributed to how a parameter is passed
- remove "hyper"s
- introduction of the `param` merge space, which corresponds to the old notion of "hyper" (merge method parameters with a default value are automatically in `param` merge space)
- introduction of the `StateDict[...]` parameter type, which allows loading arbitrary keys from an input parameter for each output key to be merged (it also allows not loading a key at all when it is not needed, e.g. `weighted_sum` with `alpha=1.0` doesn't need to load model A into memory at all)
- bump the serialization format version (it is not backwards compatible with the old format version but can easily be converted)
- `None`, `Tensor`, `bool`, `int`, `float`, `str`, a `TypeVar` with constraints that are a subset of these types, and a `StateDict[...]` of any of these types (including a `TypeVar`) are all valid merge method parameter types now (and will also be serialized and deserialized properly)
- a cache mechanism is now built into each merge method and can be enabled or disabled at will
- automatic config conversion using `sd_mecha.convert`
- automatic model config detection (no need to specify "sdxl" in `sd_mecha.model` now)
- rename the "sd1" model config to "sd1-ldm", and "sdxl" to "sdxl-sgm"
- many other things, see the docs or hop in the Discord for more info
Note: the yaml configs listed under here are necessary for the code to work. They are pre-included in the PyPI release. To install from source using `pip install -e .`, they must be placed under `sd_mecha/extensions/builtin/model_configs`. This is a temporary measure, until configs are part of a public web API that is under construction.
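To illustrate the key-wise semantics of the new `fallback` method described above, here is a minimal pure-Python sketch using plain dicts as stand-ins for state dicts (the real `sd_mecha.fallback` operates lazily on recipe nodes, not eager dicts):

```python
def fallback(a: dict, b: dict) -> dict:
    """Key-wise fallback: keep every key of `a`, and delegate keys
    that are missing in `a` to `b`. Mirrors the documented semantics
    of sd_mecha.fallback(a, b); illustrative sketch only."""
    merged = dict(b)   # start from b so its keys act as the fallback
    merged.update(a)   # keys present in a always take precedence
    return merged

a = {"w1": 1.0}
b = {"w1": 9.0, "w2": 2.0}
print(fallback(a, b))  # {'w1': 1.0, 'w2': 2.0}
```

In the real library this happens per output key, so `b` is only consulted for keys `a` cannot provide.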
0.0.29
0.0.27
0.0.26
- allow merging in delta space by passing `strict_weight_space=False` to `RecipeMerger.merge_and_save()`
- handle unexpected float types when an integer is expected in some merge methods
- replace the parameter `no_rescale` of the method `ties_sum_with_dropout` with `rescale` and use a differentiable implementation
- use `torch.Generator` instead of `torch.manual_seed`
- revert the use of `torch.svd_lowrank` in `rotate` when alignment is fractional, as it otherwise results in NaNs during inference
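The `rescale` behaviour mentioned above can be sketched in plain Python. This is illustrative only: the real `ties_sum_with_dropout` operates on tensors, is differentiable, and draws its dropout mask from a `torch.Generator`; here a stdlib `random.Random` stands in for it.

```python
import random

def dropout_rescale(delta, p, rescale=True, seed=0):
    """Drop each delta entry with probability `p`. When `rescale` is
    True, scale the survivors by 1 / (1 - p) so each entry keeps the
    same expected value (the DARE-style rescaling). Sketch only."""
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - p) if rescale else 1.0
    return [d * scale if rng.random() >= p else 0.0 for d in delta]

out = dropout_rescale([1.0] * 8, p=0.5)
```

With `rescale=False` (the old `no_rescale=True`), survivors keep their original magnitude, which biases the merged delta toward zero as `p` grows.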
0.0.25
0.0.24
0.0.23
0.0.21
- add new builtin methods: `n_average`, `geometric_median`, `ties_sum_extended`/`add_difference_ties_extended`, `ties_sum_with_dropout`/`ties_with_dare`, `model_stock_for_tensor`/`model_stock_n_models`
- add a new parameter `vote_sgn` to `ties_sum`/`add_difference_ties`

Credits to @6DammK9 for this contribution!
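For background, a geometric median is typically computed with Weiszfeld-style fixed-point iteration. Below is a minimal pure-Python sketch on small coordinate tuples; it is not sd-mecha's implementation (the builtin method works on model tensors), just the underlying idea:

```python
def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld fixed-point iteration: starting from the arithmetic
    mean, repeatedly re-average the points, weighting each point by
    the inverse of its distance to the current estimate."""
    dim = len(points[0])
    est = [sum(p[i] for p in points) / len(points) for i in range(dim)]
    for _ in range(iters):
        # inverse-distance weights (eps guards against division by zero)
        weights = [
            1.0 / max(sum((p[i] - est[i]) ** 2 for i in range(dim)) ** 0.5, eps)
            for p in points
        ]
        total = sum(weights)
        est = [
            sum(w * p[i] for w, p in zip(weights, points)) / total
            for i in range(dim)
        ]
    return est
```

Unlike `n_average` (the plain mean), the geometric median is robust to a single outlier model pulling the merge far in one direction.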
0.0.20
- revert `sd_mecha.train_difference` to the original implementation from supermerger
- add 3 new methods: `add_opposite`, `clamped_add_opposite` and `select_max_delta`

Note that `clamped_add_opposite` corresponds to the implementation of train difference in 0.0.19 and earlier versions.
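All of these methods belong to the family of key-wise delta arithmetic. As a rough, hypothetical sketch of that general family only (the function name, `alpha` parameter, and clamping range below are illustrative assumptions, NOT the exact formulas of `train_difference`, `add_opposite`, or `clamped_add_opposite`):

```python
def add_difference(a, b, c, alpha=1.0, clamp=False):
    """Generic key-wise delta arithmetic: a + alpha * (b - c).
    With clamp=True, the result is clamped to the range spanned by
    b and c. Hypothetical sketch, not the exact sd-mecha formulas."""
    out = {}
    for key in a:
        v = a[key] + alpha * (b[key] - c[key])
        if clamp:
            lo, hi = min(b[key], c[key]), max(b[key], c[key])
            v = min(max(v, lo), hi)
        out[key] = v
    return out
```

The clamped variants exist because unclamped delta addition can push weights outside the range either parent model ever produced, which tends to degrade inference quality.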
0.0.17
- rename the method `clip` to `clamp` to disambiguate it from "CLIP" the text encoder

I originally mislabelled the tag as `0.17` and I am too lazy to fix the commit target after creating the new `0.0.17` tag (the GitHub UI doesn't allow me to enter a commit hash), so here is the appropriate commit hash of the release: f8b0800