Failed Inference with GFPGAN #4
Comments
@Neltherion What are your CLI flags? One thing you can try is …
I have just released a new update; can you check if this is still a problem on the latest master?
I re-downloaded the repo and I'm still getting a similar error on GFPGAN:
@elyxlz Did you change anything in the docker-compose file? What is your OS and GPU?
Linux Ubuntu 20.04 LTS
@AbdBarho No, it's the default installation. Windows 11, NVIDIA 2080 Ti
I think I managed to reproduce the problem: it happens when this checkbox is activated. Is this correct, @Neltherion @elyxlz?
@AbdBarho Yes, but it also occurred on the dedicated GFPGAN tab when I uploaded an image and tried it.
@elyxlz I managed to pinpoint the problem to this line: https://github.com/hlky/stable-diffusion/blob/main/scripts/webui.py#L851 I will get back to you when it's fixed!
The problem seems to be with the GFPGAN library itself. The library defines a global device object: if the machine supports CUDA, the image is moved to the GPU. It does not accept a device parameter for any of its functions, which is not ideal. There is even an open issue about this, and it does not look like it will be fixed soon. I think the only option we have here is to run GFPGAN on the GPU, while RealESRGAN can run on either the CPU or the GPU.
I have updated the docker compose file to reflect this, and I will keep an eye on future changes to the library. You can pull the most up-to-date state and run it again.
I hope this fixes your problem.
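For illustration, here is a minimal, generic PyTorch sketch of the mismatch described above. It is not the actual GFPGAN or webui.py code; the model and tensor below are hypothetical stand-ins. It shows why feeding a CPU tensor to CUDA weights produces exactly this error, and the usual workaround of moving the input to whatever device the weights live on:

```python
# Minimal sketch, assuming a generic PyTorch model. NOT the real GFPGAN code;
# "model" and "img" are hypothetical stand-ins used only to illustrate the error.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in for a face-restoration network
if torch.cuda.is_available():
    model = model.cuda()  # weights become torch.cuda.FloatTensor

img = torch.rand(1, 3, 64, 64)  # input image tensor, still a CPU torch.FloatTensor

# On a CUDA machine, model(img) would raise:
#   RuntimeError: Input type (torch.FloatTensor) and weight type
#   (torch.cuda.FloatTensor) should be the same ...
# Generic workaround: move the input to the same device as the weights.
device = next(model.parameters()).device
out = model(img.to(device))
print(out.shape, out.device)
```

Since GFPGAN does not expose such a device argument, the only practical option here is to keep GFPGAN on the GPU.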
The author of the WebUI has forked the original library and moved the model to the CPU! More info here: Sygil-Dev/stable-diffusion#130 I think your problem should now be solved. Feel free to reopen if it persists.
Thanks for the detailed explanations. Another unrelated question: if I …
@Neltherion There are two aspects to your question:
docker compose: if you run …
docker image: if you just use …
In summary, use …
BTW, you can change the commit SHA on your local machine if you want to test stuff and don't want to wait for me to update the repo. I hope this answers your question.
Thanks for all the responses so far. So if I'm not mistaken, you're changing the hardcoded commit SHA every time you push an update to your repo, to stay in sync with the latest WebUI version? Thanks again.
Yes! Additionally, I go through all new commits in the WebUI repo to see what has changed and whether I need to update the dependencies and/or the Dockerfile, and I test the newest version on my PC.
I get this error when trying to use GFPGAN:

Failed inference for GFPGAN: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

Should some flag be set to avoid using Torch's FloatTensor?
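To answer the "should some flag be set" question, the quickest check is to inspect which device the weights and the input actually live on before running inference. A small diagnostic sketch with hypothetical names (model, img_tensor), using only standard PyTorch attributes:

```python
# Diagnostic sketch with hypothetical names ("model", "img_tensor"); only standard
# PyTorch attributes are used to show where the weights and the input actually live.
import torch
import torch.nn as nn

model = nn.Linear(4, 4)          # stand-in for the face-restoration network
img_tensor = torch.rand(1, 4)    # stand-in for the preprocessed input image

print(torch.cuda.is_available())             # does this machine expose a GPU to torch?
print(next(model.parameters()).device)       # device of the weights, e.g. cpu or cuda:0
print(img_tensor.device, img_tensor.type())  # device and type of the input, e.g. torch.FloatTensor
```

If the two devices differ, inference fails with exactly the error above; rather than setting a global flag, the usual fix is to move one side explicitly with .to(), .cuda(), or .cpu(), which is effectively what the fork mentioned above does by moving the model to the CPU.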