Cannot install PyTorch 1.12.1 as dependency since it has been updated to 2.0 #1891
Comments
Oh… judging from https://download.pytorch.org/whl/torch/, looks like I should've installed Python 3.10 instead?
My bad. Python 3.10 installs just fine, but then I found out mps raises an exception in the program.
Some Googling tells me I might need to update to PyTorch 1.13 instead of 1.12.
Resolution: Use Python 3.10.
Successfully eliminated the error with PyTorch 1.13.1 + Torchvision 0.14.1.
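A quick way to verify that an install like this can actually use MPS is a check along these lines (a minimal sketch using stock PyTorch APIs, not chaiNNer's code):

```python
import torch

# Prefer MPS when the build supports it and the hardware exposes it; otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Tiny smoke test: if this succeeds, basic MPS tensor ops work on this install.
x = torch.rand(4, 4, device=device)
print(torch.__version__, x.device)
```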
PyTorch 2.0 builds are available. Unfortunately, the op I need (roll) is still not supported on MPS in 2.0.1 as of right now, according to https://qqaatw.dev/pytorch-mps-ops-coverage/ 😵
If 2.1 is needed, we're planning on updating chaiNNer to use Python 3.11 and PyTorch 2.1 when 2.1 releases. The only issue is we'll need to replace the current clipboard dependency, which we already have a plan for. Thanks for keeping this ticket updated with your experience using MPS, and glad to know it actually works somewhat. Nobody else has ever ended up letting me know if they got it working.
By the way, when I did npm install (using Node LTS 18.15.0) I got the following:
I then used pnpm instead, which worked. However, I just tried Upscale Face to see whether other operations can work with mps, and was met with this error:
So I think there might still be something wrong with my build and I didn't actually work around the issue by using pnpm. Could you also tell me how to solve the Node dependencies issue correctly? UPDATE: Not an app issue, but it seems to be related to using external Python. If I switch back to bundled Python, it works normally.
Should be this line: src/main/backend/process.ts, line 66 (commit 6fd553e).
This is where we spawn the backend process and define its environment variables. You should be able to just add it to that env object.
You should be able to resolve that by using either (or both) --legacy-peer-deps and --force, but I'm not entirely sure. I keep meaning to look into that since new contributors keep running into that issue.
Good news! I added the environment variable, and a SwinIR upscale that used to take 2m7s now finishes in just 37s. Lovely to see all these cores being used.
Will just wait for you to resolve the dependency conflict since pnpm works for me now. However, as my update mentioned, the problem seems to be related to using external Python and not the app itself. I get the same aforementioned "Error running nodes" error (where the Upscale Face node doesn't get any input?) using the release build as well. The flow is set up as shown in the attached screenshot.
I bet this is just a PyTorch thing from using the CPU fallback ops. Not sure what we'd be able to do about that.
That's a completely different issue from the dependency conflict stuff. As for that, my guess is facexlib isn't detecting a face there. Could you try a different image? If you get the same thing, it might be a pytorch 2 incompatibility or something.
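The thread never names the environment variable that was added, but given the mention of CPU fallback ops just above, it is presumably PyTorch's PYTORCH_ENABLE_MPS_FALLBACK. Assuming that, its effect on the Python backend looks roughly like this sketch (the variable is read when torch is imported, which is why it has to be present in the spawned process's environment):

```python
import os

# Assumption: the variable added to the backend's spawn env is
# PYTORCH_ENABLE_MPS_FALLBACK. PyTorch reads it at import time, so it must be
# in the process environment before `import torch` runs.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")

import torch

x = torch.arange(8, device="mps")
# roll had no MPS kernel at the time of this thread; with the fallback enabled,
# PyTorch runs it on the CPU (with a warning) instead of raising NotImplementedError.
y = torch.roll(x, shifts=2)
print(y)
```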
@joeyballentine FYI I tried using PyTorch 1.13.1 + Torchvision 0.14.1 with integrated Python 3.9.11 and got the same error.
Can you paste the actual error message you get, please?
I switched back to external Python but the error is the same as the one I posted above. Here it is again:
Looks like a bug in the frontend this time. I'll look into it. |
Anyway, that bug is just in how an existing error message gets formatted, so the node still won't work even after I fix the frontend bug. The actual error message is:
Pretty generic. We'll have to see some logs to figure out where the error occurred. We know that the error comes from an Upscale Face node, though. So that's something. |
I'm betting it's just not detecting any faces, in which case we should probably show a better error message.
Right now, we literally don't show the error from Upscale Face at all. The error above is internal error reporting stuff that failed (see #1908). |
Hmm, I'm not so sure. With external Python 3.10, the error shows with every picture. If I switch back to integrated Python 3.9 and intentionally feed it a picture with no face, it just outputs the original picture without processing or any error. I also cloned GFPGAN (referenced on the facexlib page) and ran its example script on the same picture with the same Python 3.10 + PyTorch 1.13.1 + Torchvision 0.14.1 setup, and it ran normally. I compiled a new build from main, and here's the output:
So there's nothing in renderer.log, but when I check main.log there's finally some useful info! Looks like it's really mps-related:
Found a related discussion here: Mikubill/sd-webui-controlnet#860
The error does look MPS-related, I guess. It's coming directly from facexlib, so it's not a problem with chaiNNer specifically but with the facexlib dependency. If using the latest version of facexlib doesn't work, then you'll have to open an issue there.
Yup, I manually changed a file in facexlib and can get past this error. Someone already opened an issue there requesting that "device" be specifiable as a parameter (currently it only uses cuda, falling back to cpu if it's not available). For reference: xinntao/facexlib#18
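The device-selection logic that issue asks for is essentially the standard three-way pick sketched below. This is the generic pattern, not facexlib's actual code (which, per the comment above, only tried cuda and then cpu at the time):

```python
import torch

def pick_device() -> torch.device:
    # What the linked facexlib issue asks for: let the caller (or auto-detection)
    # choose MPS on Apple Silicon instead of hard-coding cuda/cpu.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
# A downstream library would then move its detection/parsing models to this
# device, e.g. model.to(device), instead of assuming CUDA is present.
```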
Information:
Description
I tried using a SwinIR upscaling model today and found the CPU processing time and RAM usage intolerable. After some Googling I found that "mps" support was only added in PyTorch 1.12, and judging from this I can't get it with the bundled Python, so I installed Python 3.11 via Homebrew and pointed chaiNNer to it. Sure enough, after a restart chaiNNer was able to find it and list PyTorch 1.12.1 as a dependency.
However, when I clicked "Install", the spinner started but the progress bar wouldn't even show. After some minutes of inactivity, I restarted chaiNNer and turned on "Use Pip Directly" to retry, and here's the message:
Visiting the official PyTorch site, I can also see that the current download defaults to 2.0.1.
Time to update the dependency?
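For diagnosing setups like this, a generic check of what the external interpreter actually provides looks roughly like the following (a sketch, not part of chaiNNer's installer; note that 1.12.x wheels only exist for Python 3.10 and earlier, which is why a 3.11 interpreter can't install the pinned version):

```python
import sys

import torch

print(sys.executable)                      # which Python chaiNNer was pointed at
print(sys.version_info[:3])                # 1.12.x wheels only exist up to Python 3.10
print(torch.__version__)                   # what pip actually installed
print(torch.backends.mps.is_available())   # whether that build can use MPS at all
```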