Anyone having success with the LORA implementation? #635
Unanswered · andykaufseo asked this question in Q&A
Replies: 2 comments 2 replies
-
What learning rate are you using? It should be around 1e-4 to 1e-5, not 1e-6 (which is typical for normal Dreambooth). Are you training the text encoder as well, or just the UNet?
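For context on why LoRA tolerates a much larger learning rate than full Dreambooth fine-tuning: only a small low-rank adapter is trained while the pretrained weight stays frozen. A minimal NumPy sketch of the forward pass (the rank `r`, scale `alpha`, and layer sizes here are illustrative values, not the extension's defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not the extension's defaults
d_out, d_in, r, alpha = 64, 32, 4, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- gradients flow only into A and B
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x), W @ x)
```

Because the trainable parameters number only `r * (d_in + d_out)` per layer rather than `d_in * d_out`, a learning rate two orders of magnitude higher than the usual Dreambooth 1e-6 is the norm.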
-
Yeah, what are your parameters? I haven't tested objects, but I did get success with, let's say, states or actions, and definitely styles. I even tried a very small fine-tune with 620 images and it worked like a charm.
-
I've tested it with this Dreambooth extension (I didn't test the official implementation): it does what it says, with fast speed (high batch count) and low VRAM usage (can get as low as 4.7 GB). But the quality...
Regular Dreambooth needs 100-200 steps per image to get it right (and maybe another 100-200 to get it perfect).
With the LORA option checked, after 1000 steps per image it's not even close (I can see it's getting there, it has an idea of what's going on, but the results look like DB after 20-50 steps).
I tried different learning rates, captions / no captions, etc.
Has anyone had any notable success with it?
@d8ahazard - is everything implemented just like in the original paper? (I'm pretty sure it is, but I just wanted to double-check with you.)
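For reference on what "like the original paper" means: the LoRA paper's update is merged back into the base weight as W' = W + (alpha/r)·B·A, so a correct implementation adds zero inference cost and the merged weight must reproduce the adapter path exactly. A hedged NumPy sketch of that merge (shapes are illustrative, not tied to any Stable Diffusion layer):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative shapes only
d_out, d_in, r, alpha = 16, 8, 2, 4

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # trained down-projection
B = rng.standard_normal((d_out, r))      # trained up-projection

# The paper's inference-time merge: fold the low-rank update into W
W_merged = W + (alpha / r) * (B @ A)

x = rng.standard_normal(d_in)
# Merged weight reproduces the base-plus-adapter output exactly:
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

If an implementation's merged and unmerged outputs diverge, the scaling factor (alpha/r) is the usual suspect.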