SD2.1 Base / Dreambooth training with extension #586
Replies: 5 comments 4 replies
-
Just checked the JSON file, and yeah, it looks like it's flagged as v_prediction after it builds the training image from the source checkpoint. So I'm guessing it doesn't pull that information from the yaml file that's included with the source model?
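For anyone who wants to check this on their own install, here's a minimal sketch that inspects the extension's generated training config. The path and the exact field names (`db_config.json`, anything mentioning `v2` / `prediction` / `parameterization`) are assumptions that may vary between extension versions, so treat it as a starting point rather than the extension's official layout:

```python
import json
from pathlib import Path

# Assumed location: adjust to wherever your extension writes its training
# config (e.g. models/dreambooth/<model_name>/db_config.json on some installs).
config_path = Path("models/dreambooth/my_sd21_model/db_config.json")

with config_path.open() as f:
    cfg = json.load(f)

# Print any field that looks related to the SD 2.x prediction type, so you can
# see whether a 512-base model was mis-flagged as v-prediction.
for key, value in cfg.items():
    if any(tag in key.lower() for tag in ("v2", "prediction", "parameterization")):
        print(f"{key}: {value}")
```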
-
I've seen this as well. The 512 version gets assumed to be the same as the 768 one. Good catch!
-
I've noticed that too when I was facing issues generating class images (#531). Also, I'd advise not exceeding a 0.0000015 learning rate and using as few images as possible. My latest trick for character training is to train the close-up photos at a 0.0000015 learning rate and the further-away photos at 0.0000012, with 100 steps per image.
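As a rough illustration of how those numbers combine, here is a small sketch; the image counts are made up, and only the two learning rates and the 100-steps-per-image rule come from the comment above:

```python
# Hypothetical dataset split; only the learning rates (1.5e-6 / 1.2e-6) and
# the 100-steps-per-image rule come from the comment above.
steps_per_image = 100

runs = [
    ("close-ups", 15, 1.5e-6),       # 15 images is an arbitrary example
    ("further-away shots", 10, 1.2e-6),
]

for name, image_count, lr in runs:
    total_steps = image_count * steps_per_image
    print(f"{name}: {total_steps} steps at lr={lr}")
# -> close-ups: 1500 steps at lr=1.5e-06
# -> further-away shots: 1000 steps at lr=1.2e-06
```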
-
And somehow it seems like you need to create a v2.1 512 checkpoint from the interface first, so it downloads some required files, and only then can you create a v2.1 768 checkpoint...
-
How are you training with the 2.1 model? Every time I try, the classification images just come out blank. Edit: never mind, fixed it. I needed to select xformers as the memory attention option; if it's left blank, they come out as black squares.
-
Hey! I've found that when creating a checkpoint from the interface, if the source model is the SD 2.x 512 version, the yaml file that gets created assumes it was the 768-v version.
As a consequence, if the SD 2.x model was the base version without v-prediction, the resulting model won't render correctly and just generates noise. After replacing it with the correct yaml file, it starts to render correctly. I guess my question is: are the correct settings being used during training, or should only the 768-v version be trained?
Thanks!
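For what it's worth, the two official SD 2.x inference configs differ mainly in the `parameterization` key: v2-inference-v.yaml (for the 768-v checkpoint) sets `model.params.parameterization: "v"`, while v2-inference.yaml (for the 512 base) omits it and falls back to epsilon prediction. Here's a minimal sketch to check which variant a generated yaml actually is; the path is just a placeholder:

```python
import yaml  # pip install pyyaml
from pathlib import Path

# Placeholder path: point this at the yaml the extension wrote
# next to your generated checkpoint.
yaml_path = Path("models/Stable-diffusion/my_sd21_dreambooth.yaml")

with yaml_path.open() as f:
    cfg = yaml.safe_load(f)

parameterization = cfg.get("model", {}).get("params", {}).get("parameterization")

if parameterization == "v":
    print("Config is the 768-v (v-prediction) variant.")
else:
    print("Config is the epsilon-prediction (512 base) variant.")
```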