Due diligence
I checked for similar issues and couldn't find any.
My WebUI and Unprompted are both up-to-date.
I disabled my other extensions but the problem persists.
Describe the bug
Using a txt2mask method other than clipseg causes a crash. Same thing with zoom_enhance. See: #145
Prompt
[after][txt2mask method=clip_surgery]face[/txt2mask][img2img][/after]a person
Log output
*** Error running postprocess: D:\stable-diffusion-webui\extensions\_unprompted\scripts\unprompted.py
Traceback (most recent call last):
File "D:\stable-diffusion-webui\extensions\_unprompted\lib_unprompted\shortcodes.py", line 140, in render
return str(self.handler(self.token.keyword, self.pargs, self.kwargs, context, content))
File "D:\stable-diffusion-webui\extensions\_unprompted\lib_unprompted\shared.py", line 87, in handler
return (self.shortcode_objects[f"{keyword}"].run_block(pargs, kwargs, context, content))
File "D:\stable-diffusion-webui\extensions\_unprompted/shortcodes\stable_diffusion\txt2mask.py", line 595, in run_block
self.image_mask = get_mask().resize((self.init_image.width, self.init_image.height))
File "D:\stable-diffusion-webui\extensions\_unprompted/shortcodes\stable_diffusion\txt2mask.py", line 393, in get_mask
sam = sam_model_registry[model_type](checkpoint=sam_file)
File "D:\stable-diffusion-webui\venv\lib\site-packages\segment_anything\build_sam.py", line 15, in build_sam_vit_h
return _build_sam(
File "D:\stable-diffusion-webui\venv\lib\site-packages\segment_anything\build_sam.py", line 105, in _build_sam
state_dict = torch.load(f)
File "D:\stable-diffusion-webui\modules\safe.py", line 108, in load
return load_with_extra(filename, *args, extra_handler=global_extra_handler, **kwargs)
File "D:\stable-diffusion-webui\modules\safe.py", line 156, in load_with_extra
return unsafe_torch_load(filename, *args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 815, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\serialization.py", line 1033, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\scripts.py", line 651, in postprocess
script.postprocess(p, processed, *script_args)
File "D:\stable-diffusion-webui\extensions\_unprompted\scripts\unprompted.py", line 864, in postprocess
processed = Unprompted.after(p, processed)
File "D:\stable-diffusion-webui\extensions\_unprompted\lib_unprompted\shared.py", line 192, in after
val = self.shortcode_objects[i].after(p, processed)
File "D:\stable-diffusion-webui\extensions\_unprompted/shortcodes\basic\after.py", line 98, in after
self.Unprompted.process_string(content, "after")
File "D:\stable-diffusion-webui\extensions\_unprompted\lib_unprompted\shared.py", line 201, in process_string
string = self.shortcode_parser.parse(self.sanitize_pre(string, self.Config.syntax.sanitize_before), context)
File "D:\stable-diffusion-webui\extensions\_unprompted\lib_unprompted\shortcodes.py", line 251, in parse
return stack.pop().render(context).replace(self.esc_start, "")
File "D:\stable-diffusion-webui\extensions\_unprompted\lib_unprompted\shortcodes.py", line 55, in render
return ''.join(child.render(context) for child in self.children)
File "D:\stable-diffusion-webui\extensions\_unprompted\lib_unprompted\shortcodes.py", line 55, in<genexpr>return''.join(child.render(context) forchildin self.children)
File "D:\stable-diffusion-webui\extensions\_unprompted\lib_unprompted\shortcodes.py", line 144, in render
raise ShortcodeRenderingError(msg) from ex
lib_unprompted.shortcodes.ShortcodeRenderingError: An exception was raised while rendering the 'txt2mask' shortcode in line 1.
Unprompted version
10.1.4
WebUI version
1.6.0
Other comments
My googling of the error suggests either an incompatible file or an incompatible version of torch. I changed the model file to one I've used successfully with the Segment Anything extension, and I get the same error. Updating the torch version breaks the WebUI with an error stating torch can't access the GPU, and causes other dependency errors. I don't think it's a problem on my end, because I don't have problems with other extensions that use SAM.
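If it helps narrow that down, here's a quick standalone check I could try outside the WebUI (so modules/safe.py isn't wrapping torch.load). The checkpoint path below is just an example, not my real location:

import torch
from segment_anything import sam_model_registry

ckpt = r"D:\stable-diffusion-webui\models\sam\sam_vit_h_4b8939.pth"  # example path

# Fails here if the checkpoint file itself is unreadable or corrupt
state_dict = torch.load(ckpt, map_location="cpu")

# Fails here if torch and segment_anything disagree on how to build/load the model
sam = sam_model_registry["vit_h"](checkpoint=ckpt)
print("SAM checkpoint loaded OK")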
I'm not too knowledgeable on the subject, but is Segment Anything designed to take a text prompt as input? The Segment Anything extension uses GroundingDINO to interpret the text prompt, and SAM then infers the object within the bounding boxes GroundingDINO generates. SAM itself doesn't take text; it only takes bounding boxes and points as input. Maybe that's just how that extension works?
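For reference, my (possibly wrong) understanding of the upstream segment_anything API is that the predictor only accepts point/box prompts, roughly as sketched below; the checkpoint, image, and box coordinates are made-up placeholders:

import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # placeholder checkpoint
predictor = SamPredictor(sam)

image = np.array(Image.open("a_person.png").convert("RGB"))  # placeholder image
predictor.set_image(image)

# SAM itself only consumes geometric prompts; a text-grounding model such as
# GroundingDINO would have to turn "face" into this box first.
masks, scores, _ = predictor.predict(
    box=np.array([100, 100, 400, 400]),  # made-up XYXY box
    multimask_output=False,
)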
Could you try running the WebUI with the --disable-safe-unpickle command-line arg? On my device, this is necessary for loading the fastsam model; I'm not sure if it applies to clip_surgery as well.
Do you get the same error when using [txt2mask] outside of the [after] block? For example, a prompt such as [txt2mask method=clip_surgery]face[/txt2mask]walter white face in img2img inpainting mode.
I also released a small patch, v10.1.5, that addresses a few potentially related problems. Feel free to give it a try.
I can confirm that clip_surgery and fastsam methods are working in this version, although I still find their performance quite disappointing compared to clipseg.