
Releases: ljleb/sd-mecha

0.1.0rc1

04 Feb 04:23
d61c178
Pre-release
  • new merge method API (@merge_method, Parameter(...), Return(...))
  • new model config API (blocks are no longer embedded in configs; instead, separate block configs plus conversion are used)
  • replace block-wise merging with key-wise merging
  • all lycoris types can now be implemented (note that only kohya lora is implemented so far)
  • new merge method sd_mecha.fallback(a, b) (delegate keys that are missing in a to b)
  • new merge method sd_mecha.cast(a, *, device, dtype) that replaces the old device and dtype kwargs that were automatically added
  • unification of args and kwargs of merge methods: no special meaning is attached to whether a parameter is passed positionally or by keyword
  • remove "hyper"s
  • introduction of the param merge space, which corresponds to the old notion of "hyper" (merge method parameters with a default value are automatically in param merge space)
  • introduction of the StateDict[...] parameter type, which allows loading arbitrary keys from an input parameter for each output key to be merged; it also allows skipping a key entirely when it is not needed (e.g. weighted_sum with alpha=1.0 doesn't need to load model A into memory at all)
  • bump serialization format version (it is not backwards compatible with the old format version but can easily be converted)
  • None, Tensor, bool, int, float, str, a TypeVar with constraints that are a subset of these types, and StateDict[Tensor, bool, int, float, str or a TypeVar] are all valid merge method parameter types now (and will also be serialized and deserialized properly)
  • a cache mechanism is now built into each merge method and can be enabled or disabled at will
  • automatic config conversion using sd_mecha.convert
  • automatic model config detection (no need to pass "sdxl" to sd_mecha.model anymore)
  • rename "sd1" model config to "sd1-ldm", and "sdxl" to "sdxl-sgm"
  • many other things, see docs or hop in the Discord for more info
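The lazy-loading idea behind the StateDict[...] parameter type can be sketched in plain Python. This is an illustrative mock, not the actual sd-mecha API: a dict-like wrapper materializes keys only on access, so a merge like weighted_sum with alpha=1.0 never reads model A at all.

```python
# Illustrative sketch only (not sd-mecha's StateDict): a lazy mapping
# that defers loading until a key is accessed.

class LazyStateDict:
    """Dict-like wrapper that loads keys on demand and counts loads."""

    def __init__(self, loader):
        self._loader = loader  # callable mapping key -> weight value
        self.loads = 0         # number of keys actually materialized

    def __getitem__(self, key):
        self.loads += 1
        return self._loader(key)


def weighted_sum(a, b, key, alpha=0.5):
    # shortcut cases: one input fully determines the result,
    # so the other state dict is never touched
    if alpha == 1.0:
        return b[key]
    if alpha == 0.0:
        return a[key]
    return (1 - alpha) * a[key] + alpha * b[key]


model_a = LazyStateDict(lambda k: 10.0)
model_b = LazyStateDict(lambda k: 20.0)

result = weighted_sum(model_a, model_b, "unet.weight", alpha=1.0)
# model_a.loads is still 0: model A was never read from disk
```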

Note: the yaml configs listed below are required for the code to work. They are pre-included in the pypi release. When installing from source with pip install -e ., they must be placed under sd_mecha/extensions/builtin/model_configs. This is a temporary measure until configs are served from a public web API that is under construction.
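The behavior described for sd_mecha.fallback(a, b) — keys missing in a are delegated to b — has the same lookup order as the standard library's collections.ChainMap. This is a conceptual sketch of that idea, not sd-mecha code:

```python
from collections import ChainMap

# a: a partial set of overrides (e.g. keys produced by an adapter);
# b: the full base model providing every remaining key
a = {"up.weight": 0.3}
b = {"up.weight": 1.0, "down.weight": 2.0}

# lookups try a first, then fall back to b for missing keys
merged = ChainMap(a, b)
```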

0.0.29

07 Jan 15:43
Pre-release
  • forward sdxl v_pred and ztsnr keys, which used to be automatically discarded

0.0.27

28 Oct 05:27
450532e
Pre-release
  • add SD 3.5 config

0.0.26

06 Sep 16:21
ef4e793
Pre-release
  • allow merging in delta space by passing strict_weight_space=False to RecipeMerger.merge_and_save()
  • handle unexpected float types when expecting an integer in some merge methods
  • replace the parameter no_rescale of the method ties_sum_with_dropout with rescale and use a differentiable implementation
  • use torch.Generator instead of torch.manual_seed
  • revert the use of torch.svd_lowrank in rotate when alignment is fractional, as it otherwise results in NaNs during inference

0.0.25

03 Aug 06:08
Pre-release
  • fix a bug where the model configurations would silently fail to parse
  • fix None seed not being supported as a default hyper value

0.0.24

03 Aug 03:48
bdb671d
Pre-release
  • remove CWM (sd_mecha.hypers.classes)

0.0.23

02 Aug 17:53
Pre-release
  • speedup rotate method by ~2x using torch.svd_lowrank

0.0.21

30 Jul 07:36
Pre-release
  • add new builtin methods: n_average, geometric_median, ties_sum_extended / add_difference_ties_extended, ties_sum_with_dropout / ties_with_dare, model_stock_for_tensor / model_stock_n_models
  • add a new parameter vote_sgn to ties_sum / add_difference_ties

Credits to @6DammK9 for this contribution!
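geometric_median computes a robust average of n models. A generic textbook sketch of the underlying Weiszfeld iteration on small tuples of floats (sd-mecha's implementation operates on tensors and may differ in details):

```python
# Weiszfeld iteration: repeatedly take a distance-weighted average of
# the points, starting from the arithmetic mean (i.e. n_average).

def geometric_median(points, iters=100, eps=1e-9):
    dim = len(points[0])
    # start from the arithmetic mean
    est = [sum(p[i] for p in points) / len(points) for i in range(dim)]
    for _ in range(iters):
        num = [0.0] * dim
        denom = 0.0
        for p in points:
            # distance from current estimate, floored to avoid div by zero
            d = max(eps, sum((p[i] - est[i]) ** 2 for i in range(dim)) ** 0.5)
            w = 1.0 / d
            for i in range(dim):
                num[i] += w * p[i]
            denom += w
        est = [x / denom for x in num]
    return est


# for collinear points the geometric median is the 1-D median,
# i.e. the middle point (1.0, 0.0) here
med = geometric_median([(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)])
```

The mean of these points is x ≈ 3.67, which an outlier like (10.0, 0.0) drags away from the bulk of the data; the median stays at the middle point, which is what makes it a robust way to combine many models.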

0.0.20

07 Jul 03:05
Pre-release
  • revert sd_mecha.train_difference to the original implementation from supermerger
  • add 3 new methods: add_opposite, clamped_add_opposite and select_max_delta

Note that clamped_add_opposite corresponds to the implementation of train difference in 0.0.19 and earlier versions.

0.0.17

14 Jun 15:54
Pre-release
  • rename method clip to clamp to disambiguate it from "CLIP" the text encoder
  • I originally mislabelled the tag as 0.17 and am too lazy to fix the commit target after creating the new 0.0.17 tag (the GitHub UI doesn't let me enter a commit hash), so here is the appropriate commit hash for the release: f8b0800