
Failed Inference with GFPGAN #4

Closed
Neltherion opened this issue Aug 29, 2022 · 15 comments
Labels
bug Something isn't working


@Neltherion

I get this error when trying to use GFPGAN:

Failed inference for GFPGAN: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor.

Should some flag be set to avoid using Torch's FloatTensor?
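The error above is PyTorch's standard device-mismatch failure: the model weights live on one device (here the GPU, as `torch.cuda.FloatTensor`) while the input tensor lives on another (the CPU, as `torch.FloatTensor`). A minimal sketch of how the mismatch arises and the usual fix, using a tiny stand-in model rather than the actual GFPGAN code:

```python
import torch

# Tiny stand-in model; GFPGAN is far larger, but the device rules are identical.
model = torch.nn.Linear(4, 2)

if torch.cuda.is_available():
    model = model.cuda()  # weights become torch.cuda.FloatTensor

x = torch.randn(1, 4)  # input stays a CPU torch.FloatTensor

try:
    model(x)  # on a CUDA machine this raises the error above; on CPU-only it succeeds
except RuntimeError as e:
    print("device mismatch:", e)

# The usual fix: move the input to the same device as the weights.
device = next(model.parameters()).device
y = model(x.to(device))
print(y.shape)
```

The fix has to happen in the code that feeds the tensor to the model; there is no general PyTorch flag that reconciles the two devices after the fact.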

@AbdBarho
Owner

AbdBarho commented Aug 29, 2022

@Neltherion What are your cli flags?

One thing you could try is the --extra-models-cpu flag.

@AbdBarho
Owner

I have just released a new update. Can you check whether this is still a problem on the latest master?

@AbdBarho AbdBarho added the bug Something isn't working label Aug 30, 2022
@elyxlz

elyxlz commented Aug 30, 2022

I re-downloaded the repo, still getting a similar error on GFPGAN:
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

@AbdBarho
Owner

@elyxlz did you change anything in the docker-compose file? what is your OS and GPU?

@Neltherion
Author

Linux Ubuntu 20.04 LTS
NVIDIA RTX 3080

@elyxlz

elyxlz commented Aug 30, 2022

@AbdBarho No it's the default installation. Windows 11, NVIDIA 2080ti

@AbdBarho
Owner

I think I managed to reproduce the problem. It happens when this checkbox is activated. Is this correct, @Neltherion @elyxlz?

[Screenshot 2022-08-30 175526: the checkbox in question]

@elyxlz

elyxlz commented Aug 30, 2022

@AbdBarho Yes, but it also occurred on the dedicated GFPGAN tab when I uploaded an image and ran it.

@AbdBarho
Owner

@elyxlz I managed to pinpoint the problem to this line:

https://github.com/hlky/stable-diffusion/blob/main/scripts/webui.py#L851

I will get back to you when it's fixed!

@AbdBarho
Owner

The problem seems to be with the facexlib library that is used to detect faces within GFPGAN.

This library defines a global device object: if the machine supports CUDA, the image is moved to the GPU. The library does not accept a device parameter in any of its functions, which is not ideal. There is even an open issue about this problem, and it does not look like it will be fixed soon.
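The anti-pattern described above can be sketched as follows. The function names here are hypothetical stand-ins, not the actual facexlib API; the point is the contrast between a module-level device fixed at import time and a device the caller can choose:

```python
import torch

# Problematic pattern (roughly what the library does): the device is chosen
# once at import time, and every function silently uses it. A caller on a
# CUDA machine has no way to keep the work on the CPU.
_GLOBAL_DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def detect_faces_global(image: torch.Tensor) -> torch.Tensor:
    return image.to(_GLOBAL_DEVICE)

# Friendlier pattern: accept the device as a parameter, defaulting to CPU,
# so the caller controls where the computation runs.
def detect_faces(image: torch.Tensor, device: torch.device = None) -> torch.Tensor:
    device = device or torch.device("cpu")
    return image.to(device)

img = torch.zeros(3, 64, 64)
out = detect_faces(img, device=torch.device("cpu"))
print(out.device)
```

Because the real library only offers the first pattern, any caller on a CUDA machine is forced onto the GPU, which is exactly the constraint described below.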

I think the only option we have here is to run GFPGAN on the GPU only, while RealESRGAN can run on either CPU or GPU.

The --optimized or --optimized-turbo flag should remain set unless your GPU has more than 6GB of VRAM.

I have updated the docker compose file to reflect this, and I will keep an eye on future changes of facexlib.

You can pull the most up-to-date state and run it again:

git pull
docker compose up --build

I hope this fixes your problem.

@AbdBarho AbdBarho added the awaiting-response Waiting for the issuer to respond label Aug 30, 2022
@AbdBarho
Owner

The author of the WebUI has forked the original library and moved the model to the CPU!

More info here: Sygil-Dev/stable-diffusion#130

I think your problem should be now solved. Feel free to reopen if the problem persists.

@AbdBarho AbdBarho removed the awaiting-response Waiting for the issuer to respond label Aug 30, 2022
@Neltherion
Author

Thanks for the detailed explanations.

Another unrelated question: If I pull your repo from Git again and then try to build the docker image, does the new build procedure also get the latest WebUI version or does it use the cached version?

@AbdBarho
Owner

AbdBarho commented Aug 31, 2022

@Neltherion there are two aspects to your question:

docker compose:

If you run docker compose up, it will use the old cached image and you will not get the latest changes. With docker compose up --build, however, Docker rebuilds the image (or at least the layers that have changed), so you will get the latest version.

docker image
Docker knows that the image needs to be rebuilt by comparing the layers. You can roughly think of it this way: if the text in the Dockerfile has changed, rebuild.

If you just use git clone ... in the Dockerfile, Docker has no way of knowing that the remote repo on GitHub has changed. This is why I hardcode the commit SHA into the Dockerfile, so it changes with each update.
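The cache-busting trick described above could look roughly like this in a Dockerfile. The SHA value here is a made-up placeholder for illustration, not the one the repo actually pins:

```dockerfile
# Hypothetical sketch: changing this SHA changes the Dockerfile text, which
# invalidates Docker's layer cache from this point onward and forces a fresh
# clone and checkout on the next build.
ARG WEBUI_SHA=abc1234def5678

RUN git clone https://github.com/hlky/stable-diffusion.git /app && \
    cd /app && \
    git checkout ${WEBUI_SHA}
```

Without the pinned SHA, the RUN line's text never changes, so Docker happily reuses the cached layer even after the upstream repo has moved on.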

In summary, use docker compose up --build, and you will get the version of the WebUI that is hardcoded into the Dockerfile.

BTW, you can change the commit SHA on your local machine if you want to test things without waiting for me to update the repo.

I hope this answers your question.

@Neltherion
Author

Thanks for all the responses so far.

So if I'm not mistaken, you're changing the hardcoded commit SHA every time you push an update to your repo, to stay current with the latest WebUI version?

Thanks again.

@AbdBarho
Owner

Yes! Additionally, I go through all new commits in the WebUI repo to see what has changed and whether I need to update the dependencies and/or Dockerfile, and I test the newest version on my PC.
