
Error(s) in loading state_dict for SanaMS: size mismatch for pos_embed: copying a param with shape torch.Size([1, 4096, 2240]) from checkpoint, the shape in current model is torch.Size([1, 1024, 2240]) #3

Open
Johnz86 opened this issue Jan 8, 2025 · 2 comments

Johnz86 commented Jan 8, 2025

ComfyUI Error Report

Error Details

  • Node ID: 164
  • Node Type: SanaCheckpointLoader
  • Exception Type: RuntimeError
  • Exception Message: Error(s) in loading state_dict for SanaMS:
    size mismatch for pos_embed: copying a param with shape torch.Size([1, 4096, 2240]) from checkpoint, the shape in current model is torch.Size([1, 1024, 2240]).
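
For context, the two shapes match the token counts you would expect from the files involved: the 2Kpx checkpoint carries a positional embedding for a 64x64 latent grid (4096 tokens), while the selected SanaMS_1600M_P1_D20 config builds the model for a 1024px, 32x32 grid (1024 tokens). A rough sanity check, assuming the 32x downscaling of the dc-ae-f32c32 VAE and patch size 1 (the "P1" in the config name):

    # Rough sanity check of the pos_embed token counts (assumes 32x VAE
    # downscaling as in dc-ae-f32c32 and patch size 1 as in the "P1" configs).
    def pos_tokens(image_px, vae_factor=32, patch_size=1):
        side = image_px // (vae_factor * patch_size)
        return side * side

    print(pos_tokens(2048))  # 4096 -> the checkpoint's [1, 4096, 2240]
    print(pos_tokens(1024))  # 1024 -> the current model's [1, 1024, 2240]

This suggests the 2K checkpoint is being loaded into the model definition for the 1024px variant.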

Stack Trace

  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI_ExtraModels\Sana\nodes.py", line 33, in load_checkpoint
    model = load_sana(

  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI_ExtraModels\Sana\loader.py", line 88, in load_sana
    m, u = model.diffusion_model.load_state_dict(state_dict, strict=False)

  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 2189, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
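
Note that strict=False in load_state_dict only tolerates missing or unexpected keys; size mismatches are still collected and raised, which is exactly what happens here. Until a matching 2K model config is available in the node, a possible interim workaround is to resize the checkpoint's positional embedding to the grid the instantiated model expects before the loader calls load_state_dict. This is only a sketch, not part of the extension, and it assumes pos_embed is a square ViT-style grid of shape [1, N, C]:

    import torch.nn.functional as F

    def fit_pos_embed(state_dict, model, key="pos_embed"):
        # Hypothetical helper: bicubically resize a square [1, N, C] pos_embed
        # grid (e.g. 64x64 = 4096 tokens from the 2Kpx checkpoint) to the grid
        # the instantiated model expects (e.g. 32x32 = 1024 tokens).
        src = state_dict.get(key)
        dst = model.state_dict().get(key)
        if src is None or dst is None or src.shape == dst.shape:
            return state_dict
        channels = src.shape[-1]
        src_side = int(src.shape[1] ** 0.5)
        dst_side = int(dst.shape[1] ** 0.5)
        grid = src.float().reshape(1, src_side, src_side, channels).permute(0, 3, 1, 2)
        grid = F.interpolate(grid, size=(dst_side, dst_side), mode="bicubic", align_corners=False)
        state_dict[key] = grid.permute(0, 2, 3, 1).reshape(1, dst_side * dst_side, channels).to(src.dtype)
        return state_dict

    # e.g. state_dict = fit_pos_embed(state_dict, model.diffusion_model) before load_state_dict

This only makes the load succeed; the proper fix is a model config whose pos_embed already matches the 2K checkpoint (see the comments below about the missing SanaMS_1600M_P1_D20_2K entry).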

System Information

  • ComfyUI Version: v0.3.10-40-gd0f3752e
  • Arguments: .\main.py
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.3.1+cu121

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 25769148416
    • VRAM Free: 19053229672
    • Torch VRAM Total: 5402263552
    • Torch VRAM Free: 115947112

Logs

2025-01-08T21:28:07.067664 - 
2025-01-08T21:28:07.273773 - [custom_nodes.comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
2025-01-08T21:28:07.273773 - [custom_nodes.comfyui_controlnet_aux] | INFO -> Using symlinks: False
2025-01-08T21:28:07.273773 - [custom_nodes.comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 
2025-01-08T21:28:07.334277 - Cannot import D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI-Anyline module for custom nodes: cannot import name 'TEDDetector' from 'custom_nodes.comfyui_controlnet_aux.src.controlnet_aux.teed' (unknown location)
2025-01-08T21:28:07.335277 - Adding D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes to sys.path
2025-01-08T21:28:07.409279 - Efficiency Nodes: Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...Success!
2025-01-08T21:28:07.413278 - Loaded Efficiency nodes from D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\efficiency-nodes-comfyui
2025-01-08T21:28:07.418278 - Loaded ControlNetPreprocessors nodes from D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\comfyui_controlnet_aux
2025-01-08T21:28:07.420279 - Loaded AdvancedControlNet nodes from D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
2025-01-08T21:28:07.426782 - Could not find AnimateDiff nodes
2025-01-08T21:28:07.438782 - Loaded IPAdapter nodes from D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
2025-01-08T21:28:07.582791 - Loaded VideoHelperSuite from D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
2025-01-08T21:28:07.589792 - ### Loading: ComfyUI-Impact-Pack (V8.2)
2025-01-08T21:28:07.647832 - ### Loading: ComfyUI-Impact-Pack (V8.2)
2025-01-08T21:28:07.669832 - Loaded ImpactPack nodes from D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
2025-01-08T21:28:07.670832 - [Impact Pack] Wildcards loading done.
2025-01-08T21:28:07.670832 - [Impact Pack] Wildcards loading done.
2025-01-08T21:28:07.922884 - [Crystools INFO] Crystools version: 1.21.0
2025-01-08T21:28:07.949388 - [Crystools INFO] CPU: 12th Gen Intel(R) Core(TM) i9-12900K - Arch: AMD64 - OS: Windows 10
2025-01-08T21:28:07.958388 - [Crystools INFO] Pynvml (Nvidia) initialized.
2025-01-08T21:28:07.958388 - [Crystools INFO] GPU/s:
2025-01-08T21:28:07.971388 - [Crystools INFO] 0) NVIDIA GeForce RTX 3090
2025-01-08T21:28:07.972401 - [Crystools INFO] NVIDIA Driver: 566.36
2025-01-08T21:28:08.204117 - Module 'diffusers' load failed. If you don't have it installed, do it:
2025-01-08T21:28:08.204117 - pip install diffusers
2025-01-08T21:28:08.673641 - [ComfyUI-Easy-Use] server: v1.2.6 Loaded
2025-01-08T21:28:08.673641 - [ComfyUI-Easy-Use] web root: D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI-Easy-Use\web_version/v1 Loaded
2025-01-08T21:28:08.746146 - ### Loading: ComfyUI-Impact-Pack (V8.2)
2025-01-08T21:28:08.747147 - [Impact Pack] Wildcards loading done.
2025-01-08T21:28:08.754587 - ### Loading: ComfyUI-Impact-Subpack (V1.2.6)
2025-01-08T21:28:08.961998 - [Impact Subpack] ultralytics_bbox: D:\Programovanie\Python\stable-diffusion-web\ComfyUI\models\ultralytics\bbox
2025-01-08T21:28:08.961998 - [Impact Subpack] ultralytics_segm: D:\Programovanie\Python\stable-diffusion-web\ComfyUI\models\ultralytics\segm
2025-01-08T21:28:08.962998 - ### Loading: ComfyUI-Inspire-Pack (V1.9.1)
2025-01-08T21:28:09.033503 - Total VRAM 24575 MB, total RAM 65346 MB
2025-01-08T21:28:09.033503 - pytorch version: 2.3.1+cu121
2025-01-08T21:28:09.033503 - xformers version: 0.0.26.post1
2025-01-08T21:28:09.034503 - Set vram state to: NORMAL_VRAM
2025-01-08T21:28:09.034503 - Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
2025-01-08T21:28:09.068502 - theUpsiders Logic Nodes: Loaded
2025-01-08T21:28:09.117178 - ### Loading: ComfyUI-Manager (V3.3.13)
2025-01-08T21:28:09.240190 - ### ComfyUI Version: v0.3.10-40-gd0f3752e | Released on '2025-01-07'
2025-01-08T21:28:09.718642 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-01-08T21:28:09.720642 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-01-08T21:28:09.750145 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-01-08T21:28:09.802145 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-01-08T21:28:09.936124 - MNeMiC Nodes: Loaded
2025-01-08T21:28:09.938124 - ------------------------------------------
2025-01-08T21:28:09.964124 - ### N-Suite Revision: ae7cc848
2025-01-08T21:28:09.965124 - Current version of packaging: 23.1
2025-01-08T21:28:09.965124 - Version of cpuinfo: Not found
2025-01-08T21:28:09.965124 - Current version of git: 3.1.31
2025-01-08T21:28:09.968123 - Current version of moviepy: 1.0.3
2025-01-08T21:28:09.968123 - Current version of cv2: 4.10.0
2025-01-08T21:28:10.074139 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-01-08T21:28:10.144217 - Current version of skbuild: 0.17.6
2025-01-08T21:28:10.145217 - Version of typing: Not found
2025-01-08T21:28:10.158216 - Current version of diskcache: 5.6.3
2025-01-08T21:28:10.186469 - Current version of llama_cpp: 0.2.26+cu121
2025-01-08T21:28:10.458188 - Current version of timm: 0.9.12
2025-01-08T21:28:11.267193 - [ReActor] - STATUS - Running v0.5.2-b1 in ComfyUI
2025-01-08T21:28:11.387699 - Torch version: 2.3.1+cu121
2025-01-08T21:28:12.303318 - (pysssss:WD14Tagger) [DEBUG] Available ORT providers: AzureExecutionProvider, CPUExecutionProvider
2025-01-08T21:28:12.303318 - (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider
2025-01-08T21:28:12.357473 - ------------------------------------------
2025-01-08T21:28:12.357473 - Comfyroll Studio v1.76 : 175 Nodes Loaded
2025-01-08T21:28:12.357473 - ------------------------------------------
2025-01-08T21:28:12.357473 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
2025-01-08T21:28:12.357473 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
2025-01-08T21:28:12.357473 - ------------------------------------------
2025-01-08T21:28:12.438644 - FizzleDorf Custom Nodes: Loaded
2025-01-08T21:28:12.522646 - [tinyterraNodes] Loaded
2025-01-08T21:28:13.168654 - [comfy_mtb] | INFO -> loaded 96 nodes successfuly
2025-01-08T21:28:13.289147 - [rgthree-comfy] Loaded 42 epic nodes. 🎉
2025-01-08T21:28:13.330654 - WAS Node Suite: BlenderNeko's Advanced CLIP Text Encode found, attempting to enable `CLIPTextEncode` support.
2025-01-08T21:28:13.331654 - WAS Node Suite: `CLIPTextEncode (BlenderNeko Advanced + NSP)` node enabled under `WAS Suite/Conditioning` menu.
2025-01-08T21:28:14.079181 - WAS Node Suite: OpenCV Python FFMPEG support is enabled
2025-01-08T21:28:14.829853 - WAS Node Suite: Finished. Loaded 221 nodes successfully.
2025-01-08T21:28:14.829853 - "Your time is now. Start where you are and never stop." - Roy T. Bennett
2025-01-08T21:28:14.830852 - 
2025-01-08T21:28:14.865085 - 
2025-01-08T21:28:14.890085 - Starting server

2025-01-08T21:28:14.891084 - To see the GUI go to: http://127.0.0.1:8188
2025-01-08T21:28:17.547122 - Start Log Catchers...
2025-01-08T21:28:17.550121 - [LogConsole] client [f255fe04f639494f8b661249b42c9dbd], console [1109810c-d81b-465f-bf9c-b7742bf5e574], connected
2025-01-08T21:28:25.259748 - got prompt
2025-01-08T21:28:26.204961 - Missing VAE keys2025-01-08T21:28:26.204961 -  2025-01-08T21:28:26.205961 - ['encoder.project_in.weight', 'encoder.project_in.bias', 'encoder.stages.0.0.conv1.conv.weight', 'encoder.stages.0.0.conv1.conv.bias', 'encoder.stages.0.0.conv2.conv.weight', 'encoder.stages.0.0.conv2.norm.weight', 'encoder.stages.0.0.conv2.norm.bias', 'encoder.stages.0.1.conv1.conv.weight', 'encoder.stages.0.1.conv1.conv.bias', 'encoder.stages.0.1.conv2.conv.weight', 'encoder.stages.0.1.conv2.norm.weight', 'encoder.stages.0.1.conv2.norm.bias', 'encoder.stages.0.2.main.weight', 'encoder.stages.0.2.main.bias', 'encoder.stages.1.0.conv1.conv.weight', 'encoder.stages.1.0.conv1.conv.bias', 'encoder.stages.1.0.conv2.conv.weight', 'encoder.stages.1.0.conv2.norm.weight', 'encoder.stages.1.0.conv2.norm.bias', 'encoder.stages.1.1.conv1.conv.weight', 'encoder.stages.1.1.conv1.conv.bias', 'encoder.stages.1.1.conv2.conv.weight', 'encoder.stages.1.1.conv2.norm.weight', 'encoder.stages.1.1.conv2.norm.bias', 'encoder.stages.1.2.main.weight', 'encoder.stages.1.2.main.bias', 'encoder.stages.2.0.conv1.conv.weight', 'encoder.stages.2.0.conv1.conv.bias', 'encoder.stages.2.0.conv2.conv.weight', 'encoder.stages.2.0.conv2.norm.weight', 'encoder.stages.2.0.conv2.norm.bias', 'encoder.stages.2.1.conv1.conv.weight', 'encoder.stages.2.1.conv1.conv.bias', 'encoder.stages.2.1.conv2.conv.weight', 'encoder.stages.2.1.conv2.norm.weight', 'encoder.stages.2.1.conv2.norm.bias', 'encoder.stages.2.2.main.weight', 'encoder.stages.2.2.main.bias', 'encoder.stages.3.0.context_module.qkv.0.weight', 'encoder.stages.3.0.context_module.aggreg.0.0.weight', 'encoder.stages.3.0.context_module.aggreg.0.1.weight', 'encoder.stages.3.0.context_module.proj.0.weight', 'encoder.stages.3.0.context_module.proj.1.weight', 'encoder.stages.3.0.context_module.proj.1.bias', 'encoder.stages.3.0.local_module.inverted_conv.conv.weight', 'encoder.stages.3.0.local_module.inverted_conv.conv.bias', 'encoder.stages.3.0.local_module.depth_conv.conv.weight', 'encoder.stages.3.0.local_module.depth_conv.conv.bias', 'encoder.stages.3.0.local_module.point_conv.conv.weight', 'encoder.stages.3.0.local_module.point_conv.norm.weight', 'encoder.stages.3.0.local_module.point_conv.norm.bias', 'encoder.stages.3.1.context_module.qkv.0.weight', 'encoder.stages.3.1.context_module.aggreg.0.0.weight', 'encoder.stages.3.1.context_module.aggreg.0.1.weight', 'encoder.stages.3.1.context_module.proj.0.weight', 'encoder.stages.3.1.context_module.proj.1.weight', 'encoder.stages.3.1.context_module.proj.1.bias', 'encoder.stages.3.1.local_module.inverted_conv.conv.weight', 'encoder.stages.3.1.local_module.inverted_conv.conv.bias', 'encoder.stages.3.1.local_module.depth_conv.conv.weight', 'encoder.stages.3.1.local_module.depth_conv.conv.bias', 'encoder.stages.3.1.local_module.point_conv.conv.weight', 'encoder.stages.3.1.local_module.point_conv.norm.weight', 'encoder.stages.3.1.local_module.point_conv.norm.bias', 'encoder.stages.3.2.context_module.qkv.0.weight', 'encoder.stages.3.2.context_module.aggreg.0.0.weight', 'encoder.stages.3.2.context_module.aggreg.0.1.weight', 'encoder.stages.3.2.context_module.proj.0.weight', 'encoder.stages.3.2.context_module.proj.1.weight', 'encoder.stages.3.2.context_module.proj.1.bias', 'encoder.stages.3.2.local_module.inverted_conv.conv.weight', 'encoder.stages.3.2.local_module.inverted_conv.conv.bias', 'encoder.stages.3.2.local_module.depth_conv.conv.weight', 'encoder.stages.3.2.local_module.depth_conv.conv.bias', 
'encoder.stages.3.2.local_module.point_conv.conv.weight', 'encoder.stages.3.2.local_module.point_conv.norm.weight', 'encoder.stages.3.2.local_module.point_conv.norm.bias', 'encoder.stages.3.3.main.weight', 'encoder.stages.3.3.main.bias', 'encoder.stages.4.0.context_module.qkv.0.weight', 'encoder.stages.4.0.context_module.aggreg.0.0.weight', 'encoder.stages.4.0.context_module.aggreg.0.1.weight', 'encoder.stages.4.0.context_module.proj.0.weight', 'encoder.stages.4.0.context_module.proj.1.weight', 'encoder.stages.4.0.context_module.proj.1.bias', 'encoder.stages.4.0.local_module.inverted_conv.conv.weight', 'encoder.stages.4.0.local_module.inverted_conv.conv.bias', 'encoder.stages.4.0.local_module.depth_conv.conv.weight', 'encoder.stages.4.0.local_module.depth_conv.conv.bias', 'encoder.stages.4.0.local_module.point_conv.conv.weight', 'encoder.stages.4.0.local_module.point_conv.norm.weight', 'encoder.stages.4.0.local_module.point_conv.norm.bias', 'encoder.stages.4.1.context_module.qkv.0.weight', 'encoder.stages.4.1.context_module.aggreg.0.0.weight', 'encoder.stages.4.1.context_module.aggreg.0.1.weight', 'encoder.stages.4.1.context_module.proj.0.weight', 'encoder.stages.4.1.context_module.proj.1.weight', 'encoder.stages.4.1.context_module.proj.1.bias', 'encoder.stages.4.1.local_module.inverted_conv.conv.weight', 'encoder.stages.4.1.local_module.inverted_conv.conv.bias', 'encoder.stages.4.1.local_module.depth_conv.conv.weight', 'encoder.stages.4.1.local_module.depth_conv.conv.bias', 'encoder.stages.4.1.local_module.point_conv.conv.weight', 'encoder.stages.4.1.local_module.point_conv.norm.weight', 'encoder.stages.4.1.local_module.point_conv.norm.bias', 'encoder.stages.4.2.context_module.qkv.0.weight', 'encoder.stages.4.2.context_module.aggreg.0.0.weight', 'encoder.stages.4.2.context_module.aggreg.0.1.weight', 'encoder.stages.4.2.context_module.proj.0.weight', 'encoder.stages.4.2.context_module.proj.1.weight', 'encoder.stages.4.2.context_module.proj.1.bias', 'encoder.stages.4.2.local_module.inverted_conv.conv.weight', 'encoder.stages.4.2.local_module.inverted_conv.conv.bias', 'encoder.stages.4.2.local_module.depth_conv.conv.weight', 'encoder.stages.4.2.local_module.depth_conv.conv.bias', 'encoder.stages.4.2.local_module.point_conv.conv.weight', 'encoder.stages.4.2.local_module.point_conv.norm.weight', 'encoder.stages.4.2.local_module.point_conv.norm.bias', 'encoder.stages.4.3.main.weight', 'encoder.stages.4.3.main.bias', 'encoder.stages.5.0.context_module.qkv.0.weight', 'encoder.stages.5.0.context_module.aggreg.0.0.weight', 'encoder.stages.5.0.context_module.aggreg.0.1.weight', 'encoder.stages.5.0.context_module.proj.0.weight', 'encoder.stages.5.0.context_module.proj.1.weight', 'encoder.stages.5.0.context_module.proj.1.bias', 'encoder.stages.5.0.local_module.inverted_conv.conv.weight', 'encoder.stages.5.0.local_module.inverted_conv.conv.bias', 'encoder.stages.5.0.local_module.depth_conv.conv.weight', 'encoder.stages.5.0.local_module.depth_conv.conv.bias', 'encoder.stages.5.0.local_module.point_conv.conv.weight', 'encoder.stages.5.0.local_module.point_conv.norm.weight', 'encoder.stages.5.0.local_module.point_conv.norm.bias', 'encoder.stages.5.1.context_module.qkv.0.weight', 'encoder.stages.5.1.context_module.aggreg.0.0.weight', 'encoder.stages.5.1.context_module.aggreg.0.1.weight', 'encoder.stages.5.1.context_module.proj.0.weight', 'encoder.stages.5.1.context_module.proj.1.weight', 'encoder.stages.5.1.context_module.proj.1.bias', 'encoder.stages.5.1.local_module.inverted_conv.conv.weight', 
'encoder.stages.5.1.local_module.inverted_conv.conv.bias', 'encoder.stages.5.1.local_module.depth_conv.conv.weight', 'encoder.stages.5.1.local_module.depth_conv.conv.bias', 'encoder.stages.5.1.local_module.point_conv.conv.weight', 'encoder.stages.5.1.local_module.point_conv.norm.weight', 'encoder.stages.5.1.local_module.point_conv.norm.bias', 'encoder.stages.5.2.context_module.qkv.0.weight', 'encoder.stages.5.2.context_module.aggreg.0.0.weight', 'encoder.stages.5.2.context_module.aggreg.0.1.weight', 'encoder.stages.5.2.context_module.proj.0.weight', 'encoder.stages.5.2.context_module.proj.1.weight', 'encoder.stages.5.2.context_module.proj.1.bias', 'encoder.stages.5.2.local_module.inverted_conv.conv.weight', 'encoder.stages.5.2.local_module.inverted_conv.conv.bias', 'encoder.stages.5.2.local_module.depth_conv.conv.weight', 'encoder.stages.5.2.local_module.depth_conv.conv.bias', 'encoder.stages.5.2.local_module.point_conv.conv.weight', 'encoder.stages.5.2.local_module.point_conv.norm.weight', 'encoder.stages.5.2.local_module.point_conv.norm.bias', 'encoder.project_out.main.0.conv.weight', 'encoder.project_out.main.0.conv.bias', 'decoder.project_in.main.conv.weight', 'decoder.project_in.main.conv.bias', 'decoder.stages.0.0.main.conv.weight', 'decoder.stages.0.0.main.conv.bias', 'decoder.stages.0.1.conv1.conv.weight', 'decoder.stages.0.1.conv1.conv.bias', 'decoder.stages.0.1.conv2.conv.weight', 'decoder.stages.0.1.conv2.norm.weight', 'decoder.stages.0.1.conv2.norm.bias', 'decoder.stages.0.2.conv1.conv.weight', 'decoder.stages.0.2.conv1.conv.bias', 'decoder.stages.0.2.conv2.conv.weight', 'decoder.stages.0.2.conv2.norm.weight', 'decoder.stages.0.2.conv2.norm.bias', 'decoder.stages.0.3.conv1.conv.weight', 'decoder.stages.0.3.conv1.conv.bias', 'decoder.stages.0.3.conv2.conv.weight', 'decoder.stages.0.3.conv2.norm.weight', 'decoder.stages.0.3.conv2.norm.bias', 'decoder.stages.1.0.main.conv.weight', 'decoder.stages.1.0.main.conv.bias', 'decoder.stages.1.1.conv1.conv.weight', 'decoder.stages.1.1.conv1.conv.bias', 'decoder.stages.1.1.conv2.conv.weight', 'decoder.stages.1.1.conv2.norm.weight', 'decoder.stages.1.1.conv2.norm.bias', 'decoder.stages.1.2.conv1.conv.weight', 'decoder.stages.1.2.conv1.conv.bias', 'decoder.stages.1.2.conv2.conv.weight', 'decoder.stages.1.2.conv2.norm.weight', 'decoder.stages.1.2.conv2.norm.bias', 'decoder.stages.1.3.conv1.conv.weight', 'decoder.stages.1.3.conv1.conv.bias', 'decoder.stages.1.3.conv2.conv.weight', 'decoder.stages.1.3.conv2.norm.weight', 'decoder.stages.1.3.conv2.norm.bias', 'decoder.stages.2.0.main.conv.weight', 'decoder.stages.2.0.main.conv.bias', 'decoder.stages.2.1.conv1.conv.weight', 'decoder.stages.2.1.conv1.conv.bias', 'decoder.stages.2.1.conv2.conv.weight', 'decoder.stages.2.1.conv2.norm.weight', 'decoder.stages.2.1.conv2.norm.bias', 'decoder.stages.2.2.conv1.conv.weight', 'decoder.stages.2.2.conv1.conv.bias', 'decoder.stages.2.2.conv2.conv.weight', 'decoder.stages.2.2.conv2.norm.weight', 'decoder.stages.2.2.conv2.norm.bias', 'decoder.stages.2.3.conv1.conv.weight', 'decoder.stages.2.3.conv1.conv.bias', 'decoder.stages.2.3.conv2.conv.weight', 'decoder.stages.2.3.conv2.norm.weight', 'decoder.stages.2.3.conv2.norm.bias', 'decoder.stages.3.0.main.conv.weight', 'decoder.stages.3.0.main.conv.bias', 'decoder.stages.3.1.context_module.qkv.0.weight', 'decoder.stages.3.1.context_module.aggreg.0.0.weight', 'decoder.stages.3.1.context_module.aggreg.0.1.weight', 'decoder.stages.3.1.context_module.proj.0.weight', 'decoder.stages.3.1.context_module.proj.1.weight', 
'decoder.stages.3.1.context_module.proj.1.bias', 'decoder.stages.3.1.local_module.inverted_conv.conv.weight', 'decoder.stages.3.1.local_module.inverted_conv.conv.bias', 'decoder.stages.3.1.local_module.depth_conv.conv.weight', 'decoder.stages.3.1.local_module.depth_conv.conv.bias', 'decoder.stages.3.1.local_module.point_conv.conv.weight', 'decoder.stages.3.1.local_module.point_conv.norm.weight', 'decoder.stages.3.1.local_module.point_conv.norm.bias', 'decoder.stages.3.2.context_module.qkv.0.weight', 'decoder.stages.3.2.context_module.aggreg.0.0.weight', 'decoder.stages.3.2.context_module.aggreg.0.1.weight', 'decoder.stages.3.2.context_module.proj.0.weight', 'decoder.stages.3.2.context_module.proj.1.weight', 'decoder.stages.3.2.context_module.proj.1.bias', 'decoder.stages.3.2.local_module.inverted_conv.conv.weight', 'decoder.stages.3.2.local_module.inverted_conv.conv.bias', 'decoder.stages.3.2.local_module.depth_conv.conv.weight', 'decoder.stages.3.2.local_module.depth_conv.conv.bias', 'decoder.stages.3.2.local_module.point_conv.conv.weight', 'decoder.stages.3.2.local_module.point_conv.norm.weight', 'decoder.stages.3.2.local_module.point_conv.norm.bias', 'decoder.stages.3.3.context_module.qkv.0.weight', 'decoder.stages.3.3.context_module.aggreg.0.0.weight', 'decoder.stages.3.3.context_module.aggreg.0.1.weight', 'decoder.stages.3.3.context_module.proj.0.weight', 'decoder.stages.3.3.context_module.proj.1.weight', 'decoder.stages.3.3.context_module.proj.1.bias', 'decoder.stages.3.3.local_module.inverted_conv.conv.weight', 'decoder.stages.3.3.local_module.inverted_conv.conv.bias', 'decoder.stages.3.3.local_module.depth_conv.conv.weight', 'decoder.stages.3.3.local_module.depth_conv.conv.bias', 'decoder.stages.3.3.local_module.point_conv.conv.weight', 'decoder.stages.3.3.local_module.point_conv.norm.weight', 'decoder.stages.3.3.local_module.point_conv.norm.bias', 'decoder.stages.4.0.main.conv.weight', 'decoder.stages.4.0.main.conv.bias', 'decoder.stages.4.1.context_module.qkv.0.weight', 'decoder.stages.4.1.context_module.aggreg.0.0.weight', 'decoder.stages.4.1.context_module.aggreg.0.1.weight', 'decoder.stages.4.1.context_module.proj.0.weight', 'decoder.stages.4.1.context_module.proj.1.weight', 'decoder.stages.4.1.context_module.proj.1.bias', 'decoder.stages.4.1.local_module.inverted_conv.conv.weight', 'decoder.stages.4.1.local_module.inverted_conv.conv.bias', 'decoder.stages.4.1.local_module.depth_conv.conv.weight', 'decoder.stages.4.1.local_module.depth_conv.conv.bias', 'decoder.stages.4.1.local_module.point_conv.conv.weight', 'decoder.stages.4.1.local_module.point_conv.norm.weight', 'decoder.stages.4.1.local_module.point_conv.norm.bias', 'decoder.stages.4.2.context_module.qkv.0.weight', 'decoder.stages.4.2.context_module.aggreg.0.0.weight', 'decoder.stages.4.2.context_module.aggreg.0.1.weight', 'decoder.stages.4.2.context_module.proj.0.weight', 'decoder.stages.4.2.context_module.proj.1.weight', 'decoder.stages.4.2.context_module.proj.1.bias', 'decoder.stages.4.2.local_module.inverted_conv.conv.weight', 'decoder.stages.4.2.local_module.inverted_conv.conv.bias', 'decoder.stages.4.2.local_module.depth_conv.conv.weight', 'decoder.stages.4.2.local_module.depth_conv.conv.bias', 'decoder.stages.4.2.local_module.point_conv.conv.weight', 'decoder.stages.4.2.local_module.point_conv.norm.weight', 'decoder.stages.4.2.local_module.point_conv.norm.bias', 'decoder.stages.4.3.context_module.qkv.0.weight', 'decoder.stages.4.3.context_module.aggreg.0.0.weight', 
'decoder.stages.4.3.context_module.aggreg.0.1.weight', 'decoder.stages.4.3.context_module.proj.0.weight', 'decoder.stages.4.3.context_module.proj.1.weight', 'decoder.stages.4.3.context_module.proj.1.bias', 'decoder.stages.4.3.local_module.inverted_conv.conv.weight', 'decoder.stages.4.3.local_module.inverted_conv.conv.bias', 'decoder.stages.4.3.local_module.depth_conv.conv.weight', 'decoder.stages.4.3.local_module.depth_conv.conv.bias', 'decoder.stages.4.3.local_module.point_conv.conv.weight', 'decoder.stages.4.3.local_module.point_conv.norm.weight', 'decoder.stages.4.3.local_module.point_conv.norm.bias', 'decoder.stages.5.0.context_module.qkv.0.weight', 'decoder.stages.5.0.context_module.aggreg.0.0.weight', 'decoder.stages.5.0.context_module.aggreg.0.1.weight', 'decoder.stages.5.0.context_module.proj.0.weight', 'decoder.stages.5.0.context_module.proj.1.weight', 'decoder.stages.5.0.context_module.proj.1.bias', 'decoder.stages.5.0.local_module.inverted_conv.conv.weight', 'decoder.stages.5.0.local_module.inverted_conv.conv.bias', 'decoder.stages.5.0.local_module.depth_conv.conv.weight', 'decoder.stages.5.0.local_module.depth_conv.conv.bias', 'decoder.stages.5.0.local_module.point_conv.conv.weight', 'decoder.stages.5.0.local_module.point_conv.norm.weight', 'decoder.stages.5.0.local_module.point_conv.norm.bias', 'decoder.stages.5.1.context_module.qkv.0.weight', 'decoder.stages.5.1.context_module.aggreg.0.0.weight', 'decoder.stages.5.1.context_module.aggreg.0.1.weight', 'decoder.stages.5.1.context_module.proj.0.weight', 'decoder.stages.5.1.context_module.proj.1.weight', 'decoder.stages.5.1.context_module.proj.1.bias', 'decoder.stages.5.1.local_module.inverted_conv.conv.weight', 'decoder.stages.5.1.local_module.inverted_conv.conv.bias', 'decoder.stages.5.1.local_module.depth_conv.conv.weight', 'decoder.stages.5.1.local_module.depth_conv.conv.bias', 'decoder.stages.5.1.local_module.point_conv.conv.weight', 'decoder.stages.5.1.local_module.point_conv.norm.weight', 'decoder.stages.5.1.local_module.point_conv.norm.bias', 'decoder.stages.5.2.context_module.qkv.0.weight', 'decoder.stages.5.2.context_module.aggreg.0.0.weight', 'decoder.stages.5.2.context_module.aggreg.0.1.weight', 'decoder.stages.5.2.context_module.proj.0.weight', 'decoder.stages.5.2.context_module.proj.1.weight', 'decoder.stages.5.2.context_module.proj.1.bias', 'decoder.stages.5.2.local_module.inverted_conv.conv.weight', 'decoder.stages.5.2.local_module.inverted_conv.conv.bias', 'decoder.stages.5.2.local_module.depth_conv.conv.weight', 'decoder.stages.5.2.local_module.depth_conv.conv.bias', 'decoder.stages.5.2.local_module.point_conv.conv.weight', 'decoder.stages.5.2.local_module.point_conv.norm.weight', 'decoder.stages.5.2.local_module.point_conv.norm.bias', 'decoder.project_out.0.weight', 'decoder.project_out.0.bias', 'decoder.project_out.2.conv.weight', 'decoder.project_out.2.conv.bias']2025-01-08T21:28:26.208961 - 
2025-01-08T21:28:26.208961 - Leftover VAE keys2025-01-08T21:28:26.208961 -  2025-01-08T21:28:26.209961 - ['encoder.conv_in.bias', 'encoder.conv_in.weight', 'encoder.conv_out.bias', 'encoder.conv_out.weight', 'encoder.down_blocks.0.0.conv1.bias', 'encoder.down_blocks.0.0.conv1.weight', 'encoder.down_blocks.0.0.conv2.weight', 'encoder.down_blocks.0.0.norm.bias', 'encoder.down_blocks.0.0.norm.weight', 'encoder.down_blocks.0.1.conv1.bias', 'encoder.down_blocks.0.1.conv1.weight', 'encoder.down_blocks.0.1.conv2.weight', 'encoder.down_blocks.0.1.norm.bias', 'encoder.down_blocks.0.1.norm.weight', 'encoder.down_blocks.0.2.conv.bias', 'encoder.down_blocks.0.2.conv.weight', 'encoder.down_blocks.1.0.conv1.bias', 'encoder.down_blocks.1.0.conv1.weight', 'encoder.down_blocks.1.0.conv2.weight', 'encoder.down_blocks.1.0.norm.bias', 'encoder.down_blocks.1.0.norm.weight', 'encoder.down_blocks.1.1.conv1.bias', 'encoder.down_blocks.1.1.conv1.weight', 'encoder.down_blocks.1.1.conv2.weight', 'encoder.down_blocks.1.1.norm.bias', 'encoder.down_blocks.1.1.norm.weight', 'encoder.down_blocks.1.2.conv.bias', 'encoder.down_blocks.1.2.conv.weight', 'encoder.down_blocks.2.0.conv1.bias', 'encoder.down_blocks.2.0.conv1.weight', 'encoder.down_blocks.2.0.conv2.weight', 'encoder.down_blocks.2.0.norm.bias', 'encoder.down_blocks.2.0.norm.weight', 'encoder.down_blocks.2.1.conv1.bias', 'encoder.down_blocks.2.1.conv1.weight', 'encoder.down_blocks.2.1.conv2.weight', 'encoder.down_blocks.2.1.norm.bias', 'encoder.down_blocks.2.1.norm.weight', 'encoder.down_blocks.2.2.conv.bias', 'encoder.down_blocks.2.2.conv.weight', 'encoder.down_blocks.3.0.attn.norm_out.bias', 'encoder.down_blocks.3.0.attn.norm_out.weight', 'encoder.down_blocks.3.0.attn.to_k.weight', 'encoder.down_blocks.3.0.attn.to_out.weight', 'encoder.down_blocks.3.0.attn.to_q.weight', 'encoder.down_blocks.3.0.attn.to_qkv_multiscale.0.proj_in.weight', 'encoder.down_blocks.3.0.attn.to_qkv_multiscale.0.proj_out.weight', 'encoder.down_blocks.3.0.attn.to_v.weight', 'encoder.down_blocks.3.0.conv_out.conv_depth.bias', 'encoder.down_blocks.3.0.conv_out.conv_depth.weight', 'encoder.down_blocks.3.0.conv_out.conv_inverted.bias', 'encoder.down_blocks.3.0.conv_out.conv_inverted.weight', 'encoder.down_blocks.3.0.conv_out.conv_point.weight', 'encoder.down_blocks.3.0.conv_out.norm.bias', 'encoder.down_blocks.3.0.conv_out.norm.weight', 'encoder.down_blocks.3.1.attn.norm_out.bias', 'encoder.down_blocks.3.1.attn.norm_out.weight', 'encoder.down_blocks.3.1.attn.to_k.weight', 'encoder.down_blocks.3.1.attn.to_out.weight', 'encoder.down_blocks.3.1.attn.to_q.weight', 'encoder.down_blocks.3.1.attn.to_qkv_multiscale.0.proj_in.weight', 'encoder.down_blocks.3.1.attn.to_qkv_multiscale.0.proj_out.weight', 'encoder.down_blocks.3.1.attn.to_v.weight', 'encoder.down_blocks.3.1.conv_out.conv_depth.bias', 'encoder.down_blocks.3.1.conv_out.conv_depth.weight', 'encoder.down_blocks.3.1.conv_out.conv_inverted.bias', 'encoder.down_blocks.3.1.conv_out.conv_inverted.weight', 'encoder.down_blocks.3.1.conv_out.conv_point.weight', 'encoder.down_blocks.3.1.conv_out.norm.bias', 'encoder.down_blocks.3.1.conv_out.norm.weight', 'encoder.down_blocks.3.2.attn.norm_out.bias', 'encoder.down_blocks.3.2.attn.norm_out.weight', 'encoder.down_blocks.3.2.attn.to_k.weight', 'encoder.down_blocks.3.2.attn.to_out.weight', 'encoder.down_blocks.3.2.attn.to_q.weight', 'encoder.down_blocks.3.2.attn.to_qkv_multiscale.0.proj_in.weight', 'encoder.down_blocks.3.2.attn.to_qkv_multiscale.0.proj_out.weight', 'encoder.down_blocks.3.2.attn.to_v.weight', 
'encoder.down_blocks.3.2.conv_out.conv_depth.bias', 'encoder.down_blocks.3.2.conv_out.conv_depth.weight', 'encoder.down_blocks.3.2.conv_out.conv_inverted.bias', 'encoder.down_blocks.3.2.conv_out.conv_inverted.weight', 'encoder.down_blocks.3.2.conv_out.conv_point.weight', 'encoder.down_blocks.3.2.conv_out.norm.bias', 'encoder.down_blocks.3.2.conv_out.norm.weight', 'encoder.down_blocks.3.3.conv.bias', 'encoder.down_blocks.3.3.conv.weight', 'encoder.down_blocks.4.0.attn.norm_out.bias', 'encoder.down_blocks.4.0.attn.norm_out.weight', 'encoder.down_blocks.4.0.attn.to_k.weight', 'encoder.down_blocks.4.0.attn.to_out.weight', 'encoder.down_blocks.4.0.attn.to_q.weight', 'encoder.down_blocks.4.0.attn.to_qkv_multiscale.0.proj_in.weight', 'encoder.down_blocks.4.0.attn.to_qkv_multiscale.0.proj_out.weight', 'encoder.down_blocks.4.0.attn.to_v.weight', 'encoder.down_blocks.4.0.conv_out.conv_depth.bias', 'encoder.down_blocks.4.0.conv_out.conv_depth.weight', 'encoder.down_blocks.4.0.conv_out.conv_inverted.bias', 'encoder.down_blocks.4.0.conv_out.conv_inverted.weight', 'encoder.down_blocks.4.0.conv_out.conv_point.weight', 'encoder.down_blocks.4.0.conv_out.norm.bias', 'encoder.down_blocks.4.0.conv_out.norm.weight', 'encoder.down_blocks.4.1.attn.norm_out.bias', 'encoder.down_blocks.4.1.attn.norm_out.weight', 'encoder.down_blocks.4.1.attn.to_k.weight', 'encoder.down_blocks.4.1.attn.to_out.weight', 'encoder.down_blocks.4.1.attn.to_q.weight', 'encoder.down_blocks.4.1.attn.to_qkv_multiscale.0.proj_in.weight', 'encoder.down_blocks.4.1.attn.to_qkv_multiscale.0.proj_out.weight', 'encoder.down_blocks.4.1.attn.to_v.weight', 'encoder.down_blocks.4.1.conv_out.conv_depth.bias', 'encoder.down_blocks.4.1.conv_out.conv_depth.weight', 'encoder.down_blocks.4.1.conv_out.conv_inverted.bias', 'encoder.down_blocks.4.1.conv_out.conv_inverted.weight', 'encoder.down_blocks.4.1.conv_out.conv_point.weight', 'encoder.down_blocks.4.1.conv_out.norm.bias', 'encoder.down_blocks.4.1.conv_out.norm.weight', 'encoder.down_blocks.4.2.attn.norm_out.bias', 'encoder.down_blocks.4.2.attn.norm_out.weight', 'encoder.down_blocks.4.2.attn.to_k.weight', 'encoder.down_blocks.4.2.attn.to_out.weight', 'encoder.down_blocks.4.2.attn.to_q.weight', 'encoder.down_blocks.4.2.attn.to_qkv_multiscale.0.proj_in.weight', 'encoder.down_blocks.4.2.attn.to_qkv_multiscale.0.proj_out.weight', 'encoder.down_blocks.4.2.attn.to_v.weight', 'encoder.down_blocks.4.2.conv_out.conv_depth.bias', 'encoder.down_blocks.4.2.conv_out.conv_depth.weight', 'encoder.down_blocks.4.2.conv_out.conv_inverted.bias', 'encoder.down_blocks.4.2.conv_out.conv_inverted.weight', 'encoder.down_blocks.4.2.conv_out.conv_point.weight', 'encoder.down_blocks.4.2.conv_out.norm.bias', 'encoder.down_blocks.4.2.conv_out.norm.weight', 'encoder.down_blocks.4.3.conv.bias', 'encoder.down_blocks.4.3.conv.weight', 'encoder.down_blocks.5.0.attn.norm_out.bias', 'encoder.down_blocks.5.0.attn.norm_out.weight', 'encoder.down_blocks.5.0.attn.to_k.weight', 'encoder.down_blocks.5.0.attn.to_out.weight', 'encoder.down_blocks.5.0.attn.to_q.weight', 'encoder.down_blocks.5.0.attn.to_qkv_multiscale.0.proj_in.weight', 'encoder.down_blocks.5.0.attn.to_qkv_multiscale.0.proj_out.weight', 'encoder.down_blocks.5.0.attn.to_v.weight', 'encoder.down_blocks.5.0.conv_out.conv_depth.bias', 'encoder.down_blocks.5.0.conv_out.conv_depth.weight', 'encoder.down_blocks.5.0.conv_out.conv_inverted.bias', 'encoder.down_blocks.5.0.conv_out.conv_inverted.weight', 'encoder.down_blocks.5.0.conv_out.conv_point.weight', 
'encoder.down_blocks.5.0.conv_out.norm.bias', 'encoder.down_blocks.5.0.conv_out.norm.weight', 'encoder.down_blocks.5.1.attn.norm_out.bias', 'encoder.down_blocks.5.1.attn.norm_out.weight', 'encoder.down_blocks.5.1.attn.to_k.weight', 'encoder.down_blocks.5.1.attn.to_out.weight', 'encoder.down_blocks.5.1.attn.to_q.weight', 'encoder.down_blocks.5.1.attn.to_qkv_multiscale.0.proj_in.weight', 'encoder.down_blocks.5.1.attn.to_qkv_multiscale.0.proj_out.weight', 'encoder.down_blocks.5.1.attn.to_v.weight', 'encoder.down_blocks.5.1.conv_out.conv_depth.bias', 'encoder.down_blocks.5.1.conv_out.conv_depth.weight', 'encoder.down_blocks.5.1.conv_out.conv_inverted.bias', 'encoder.down_blocks.5.1.conv_out.conv_inverted.weight', 'encoder.down_blocks.5.1.conv_out.conv_point.weight', 'encoder.down_blocks.5.1.conv_out.norm.bias', 'encoder.down_blocks.5.1.conv_out.norm.weight', 'encoder.down_blocks.5.2.attn.norm_out.bias', 'encoder.down_blocks.5.2.attn.norm_out.weight', 'encoder.down_blocks.5.2.attn.to_k.weight', 'encoder.down_blocks.5.2.attn.to_out.weight', 'encoder.down_blocks.5.2.attn.to_q.weight', 'encoder.down_blocks.5.2.attn.to_qkv_multiscale.0.proj_in.weight', 'encoder.down_blocks.5.2.attn.to_qkv_multiscale.0.proj_out.weight', 'encoder.down_blocks.5.2.attn.to_v.weight', 'encoder.down_blocks.5.2.conv_out.conv_depth.bias', 'encoder.down_blocks.5.2.conv_out.conv_depth.weight', 'encoder.down_blocks.5.2.conv_out.conv_inverted.bias', 'encoder.down_blocks.5.2.conv_out.conv_inverted.weight', 'encoder.down_blocks.5.2.conv_out.conv_point.weight', 'encoder.down_blocks.5.2.conv_out.norm.bias', 'encoder.down_blocks.5.2.conv_out.norm.weight', 'decoder.conv_in.bias', 'decoder.conv_in.weight', 'decoder.conv_out.bias', 'decoder.conv_out.weight', 'decoder.norm_out.bias', 'decoder.norm_out.weight', 'decoder.up_blocks.0.0.conv.bias', 'decoder.up_blocks.0.0.conv.weight', 'decoder.up_blocks.0.1.conv1.bias', 'decoder.up_blocks.0.1.conv1.weight', 'decoder.up_blocks.0.1.conv2.weight', 'decoder.up_blocks.0.1.norm.bias', 'decoder.up_blocks.0.1.norm.weight', 'decoder.up_blocks.0.2.conv1.bias', 'decoder.up_blocks.0.2.conv1.weight', 'decoder.up_blocks.0.2.conv2.weight', 'decoder.up_blocks.0.2.norm.bias', 'decoder.up_blocks.0.2.norm.weight', 'decoder.up_blocks.0.3.conv1.bias', 'decoder.up_blocks.0.3.conv1.weight', 'decoder.up_blocks.0.3.conv2.weight', 'decoder.up_blocks.0.3.norm.bias', 'decoder.up_blocks.0.3.norm.weight', 'decoder.up_blocks.1.0.conv.bias', 'decoder.up_blocks.1.0.conv.weight', 'decoder.up_blocks.1.1.conv1.bias', 'decoder.up_blocks.1.1.conv1.weight', 'decoder.up_blocks.1.1.conv2.weight', 'decoder.up_blocks.1.1.norm.bias', 'decoder.up_blocks.1.1.norm.weight', 'decoder.up_blocks.1.2.conv1.bias', 'decoder.up_blocks.1.2.conv1.weight', 'decoder.up_blocks.1.2.conv2.weight', 'decoder.up_blocks.1.2.norm.bias', 'decoder.up_blocks.1.2.norm.weight', 'decoder.up_blocks.1.3.conv1.bias', 'decoder.up_blocks.1.3.conv1.weight', 'decoder.up_blocks.1.3.conv2.weight', 'decoder.up_blocks.1.3.norm.bias', 'decoder.up_blocks.1.3.norm.weight', 'decoder.up_blocks.2.0.conv.bias', 'decoder.up_blocks.2.0.conv.weight', 'decoder.up_blocks.2.1.conv1.bias', 'decoder.up_blocks.2.1.conv1.weight', 'decoder.up_blocks.2.1.conv2.weight', 'decoder.up_blocks.2.1.norm.bias', 'decoder.up_blocks.2.1.norm.weight', 'decoder.up_blocks.2.2.conv1.bias', 'decoder.up_blocks.2.2.conv1.weight', 'decoder.up_blocks.2.2.conv2.weight', 'decoder.up_blocks.2.2.norm.bias', 'decoder.up_blocks.2.2.norm.weight', 'decoder.up_blocks.2.3.conv1.bias', 
'decoder.up_blocks.2.3.conv1.weight', 'decoder.up_blocks.2.3.conv2.weight', 'decoder.up_blocks.2.3.norm.bias', 'decoder.up_blocks.2.3.norm.weight', 'decoder.up_blocks.3.0.conv.bias', 'decoder.up_blocks.3.0.conv.weight', 'decoder.up_blocks.3.1.attn.norm_out.bias', 'decoder.up_blocks.3.1.attn.norm_out.weight', 'decoder.up_blocks.3.1.attn.to_k.weight', 'decoder.up_blocks.3.1.attn.to_out.weight', 'decoder.up_blocks.3.1.attn.to_q.weight', 'decoder.up_blocks.3.1.attn.to_qkv_multiscale.0.proj_in.weight', 'decoder.up_blocks.3.1.attn.to_qkv_multiscale.0.proj_out.weight', 'decoder.up_blocks.3.1.attn.to_v.weight', 'decoder.up_blocks.3.1.conv_out.conv_depth.bias', 'decoder.up_blocks.3.1.conv_out.conv_depth.weight', 'decoder.up_blocks.3.1.conv_out.conv_inverted.bias', 'decoder.up_blocks.3.1.conv_out.conv_inverted.weight', 'decoder.up_blocks.3.1.conv_out.conv_point.weight', 'decoder.up_blocks.3.1.conv_out.norm.bias', 'decoder.up_blocks.3.1.conv_out.norm.weight', 'decoder.up_blocks.3.2.attn.norm_out.bias', 'decoder.up_blocks.3.2.attn.norm_out.weight', 'decoder.up_blocks.3.2.attn.to_k.weight', 'decoder.up_blocks.3.2.attn.to_out.weight', 'decoder.up_blocks.3.2.attn.to_q.weight', 'decoder.up_blocks.3.2.attn.to_qkv_multiscale.0.proj_in.weight', 'decoder.up_blocks.3.2.attn.to_qkv_multiscale.0.proj_out.weight', 'decoder.up_blocks.3.2.attn.to_v.weight', 'decoder.up_blocks.3.2.conv_out.conv_depth.bias', 'decoder.up_blocks.3.2.conv_out.conv_depth.weight', 'decoder.up_blocks.3.2.conv_out.conv_inverted.bias', 'decoder.up_blocks.3.2.conv_out.conv_inverted.weight', 'decoder.up_blocks.3.2.conv_out.conv_point.weight', 'decoder.up_blocks.3.2.conv_out.norm.bias', 'decoder.up_blocks.3.2.conv_out.norm.weight', 'decoder.up_blocks.3.3.attn.norm_out.bias', 'decoder.up_blocks.3.3.attn.norm_out.weight', 'decoder.up_blocks.3.3.attn.to_k.weight', 'decoder.up_blocks.3.3.attn.to_out.weight', 'decoder.up_blocks.3.3.attn.to_q.weight', 'decoder.up_blocks.3.3.attn.to_qkv_multiscale.0.proj_in.weight', 'decoder.up_blocks.3.3.attn.to_qkv_multiscale.0.proj_out.weight', 'decoder.up_blocks.3.3.attn.to_v.weight', 'decoder.up_blocks.3.3.conv_out.conv_depth.bias', 'decoder.up_blocks.3.3.conv_out.conv_depth.weight', 'decoder.up_blocks.3.3.conv_out.conv_inverted.bias', 'decoder.up_blocks.3.3.conv_out.conv_inverted.weight', 'decoder.up_blocks.3.3.conv_out.conv_point.weight', 'decoder.up_blocks.3.3.conv_out.norm.bias', 'decoder.up_blocks.3.3.conv_out.norm.weight', 'decoder.up_blocks.4.0.conv.bias', 'decoder.up_blocks.4.0.conv.weight', 'decoder.up_blocks.4.1.attn.norm_out.bias', 'decoder.up_blocks.4.1.attn.norm_out.weight', 'decoder.up_blocks.4.1.attn.to_k.weight', 'decoder.up_blocks.4.1.attn.to_out.weight', 'decoder.up_blocks.4.1.attn.to_q.weight', 'decoder.up_blocks.4.1.attn.to_qkv_multiscale.0.proj_in.weight', 'decoder.up_blocks.4.1.attn.to_qkv_multiscale.0.proj_out.weight', 'decoder.up_blocks.4.1.attn.to_v.weight', 'decoder.up_blocks.4.1.conv_out.conv_depth.bias', 'decoder.up_blocks.4.1.conv_out.conv_depth.weight', 'decoder.up_blocks.4.1.conv_out.conv_inverted.bias', 'decoder.up_blocks.4.1.conv_out.conv_inverted.weight', 'decoder.up_blocks.4.1.conv_out.conv_point.weight', 'decoder.up_blocks.4.1.conv_out.norm.bias', 'decoder.up_blocks.4.1.conv_out.norm.weight', 'decoder.up_blocks.4.2.attn.norm_out.bias', 'decoder.up_blocks.4.2.attn.norm_out.weight', 'decoder.up_blocks.4.2.attn.to_k.weight', 'decoder.up_blocks.4.2.attn.to_out.weight', 'decoder.up_blocks.4.2.attn.to_q.weight', 'decoder.up_blocks.4.2.attn.to_qkv_multiscale.0.proj_in.weight', 
'decoder.up_blocks.4.2.attn.to_qkv_multiscale.0.proj_out.weight', 'decoder.up_blocks.4.2.attn.to_v.weight', 'decoder.up_blocks.4.2.conv_out.conv_depth.bias', 'decoder.up_blocks.4.2.conv_out.conv_depth.weight', 'decoder.up_blocks.4.2.conv_out.conv_inverted.bias', 'decoder.up_blocks.4.2.conv_out.conv_inverted.weight', 'decoder.up_blocks.4.2.conv_out.conv_point.weight', 'decoder.up_blocks.4.2.conv_out.norm.bias', 'decoder.up_blocks.4.2.conv_out.norm.weight', 'decoder.up_blocks.4.3.attn.norm_out.bias', 'decoder.up_blocks.4.3.attn.norm_out.weight', 'decoder.up_blocks.4.3.attn.to_k.weight', 'decoder.up_blocks.4.3.attn.to_out.weight', 'decoder.up_blocks.4.3.attn.to_q.weight', 'decoder.up_blocks.4.3.attn.to_qkv_multiscale.0.proj_in.weight', 'decoder.up_blocks.4.3.attn.to_qkv_multiscale.0.proj_out.weight', 'decoder.up_blocks.4.3.attn.to_v.weight', 'decoder.up_blocks.4.3.conv_out.conv_depth.bias', 'decoder.up_blocks.4.3.conv_out.conv_depth.weight', 'decoder.up_blocks.4.3.conv_out.conv_inverted.bias', 'decoder.up_blocks.4.3.conv_out.conv_inverted.weight', 'decoder.up_blocks.4.3.conv_out.conv_point.weight', 'decoder.up_blocks.4.3.conv_out.norm.bias', 'decoder.up_blocks.4.3.conv_out.norm.weight', 'decoder.up_blocks.5.0.attn.norm_out.bias', 'decoder.up_blocks.5.0.attn.norm_out.weight', 'decoder.up_blocks.5.0.attn.to_k.weight', 'decoder.up_blocks.5.0.attn.to_out.weight', 'decoder.up_blocks.5.0.attn.to_q.weight', 'decoder.up_blocks.5.0.attn.to_qkv_multiscale.0.proj_in.weight', 'decoder.up_blocks.5.0.attn.to_qkv_multiscale.0.proj_out.weight', 'decoder.up_blocks.5.0.attn.to_v.weight', 'decoder.up_blocks.5.0.conv_out.conv_depth.bias', 'decoder.up_blocks.5.0.conv_out.conv_depth.weight', 'decoder.up_blocks.5.0.conv_out.conv_inverted.bias', 'decoder.up_blocks.5.0.conv_out.conv_inverted.weight', 'decoder.up_blocks.5.0.conv_out.conv_point.weight', 'decoder.up_blocks.5.0.conv_out.norm.bias', 'decoder.up_blocks.5.0.conv_out.norm.weight', 'decoder.up_blocks.5.1.attn.norm_out.bias', 'decoder.up_blocks.5.1.attn.norm_out.weight', 'decoder.up_blocks.5.1.attn.to_k.weight', 'decoder.up_blocks.5.1.attn.to_out.weight', 'decoder.up_blocks.5.1.attn.to_q.weight', 'decoder.up_blocks.5.1.attn.to_qkv_multiscale.0.proj_in.weight', 'decoder.up_blocks.5.1.attn.to_qkv_multiscale.0.proj_out.weight', 'decoder.up_blocks.5.1.attn.to_v.weight', 'decoder.up_blocks.5.1.conv_out.conv_depth.bias', 'decoder.up_blocks.5.1.conv_out.conv_depth.weight', 'decoder.up_blocks.5.1.conv_out.conv_inverted.bias', 'decoder.up_blocks.5.1.conv_out.conv_inverted.weight', 'decoder.up_blocks.5.1.conv_out.conv_point.weight', 'decoder.up_blocks.5.1.conv_out.norm.bias', 'decoder.up_blocks.5.1.conv_out.norm.weight', 'decoder.up_blocks.5.2.attn.norm_out.bias', 'decoder.up_blocks.5.2.attn.norm_out.weight', 'decoder.up_blocks.5.2.attn.to_k.weight', 'decoder.up_blocks.5.2.attn.to_out.weight', 'decoder.up_blocks.5.2.attn.to_q.weight', 'decoder.up_blocks.5.2.attn.to_qkv_multiscale.0.proj_in.weight', 'decoder.up_blocks.5.2.attn.to_qkv_multiscale.0.proj_out.weight', 'decoder.up_blocks.5.2.attn.to_v.weight', 'decoder.up_blocks.5.2.conv_out.conv_depth.bias', 'decoder.up_blocks.5.2.conv_out.conv_depth.weight', 'decoder.up_blocks.5.2.conv_out.conv_inverted.bias', 'decoder.up_blocks.5.2.conv_out.conv_inverted.weight', 'decoder.up_blocks.5.2.conv_out.conv_point.weight', 'decoder.up_blocks.5.2.conv_out.norm.bias', 'decoder.up_blocks.5.2.conv_out.norm.weight']2025-01-08T21:28:26.214964 - 
2025-01-08T21:28:27.026007 - end_vram - start_vram: 21300120 - 21300120 = 0
2025-01-08T21:28:27.026298 - #171 [ExtraVAELoader]: 1.76s - vram 0b
2025-01-08T21:28:27.027301 - end_vram - start_vram: 21300120 - 21300120 = 0
2025-01-08T21:28:27.027301 - #165 [SanaResolutionSelect]: 0.00s - vram 0b
2025-01-08T21:28:27.028300 - end_vram - start_vram: 21300120 - 21300120 = 0
2025-01-08T21:28:27.029301 - #176 [EmptySanaLatentImage]: 0.00s - vram 0b
2025-01-08T21:28:27.961736 - Fetching 12 files: 100%|██████████| 12/12 [00:00<00:00, 6001.15it/s]
2025-01-08T21:28:29.607719 - Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00,  3.98it/s]
2025-01-08T21:28:32.278846 - end_vram - start_vram: 5249997208 - 21300120 = 5228697088
2025-01-08T21:28:32.279846 - #169 [GemmaLoader]: 5.25s - vram 5228697088b
2025-01-08T21:28:32.746956 - end_vram - start_vram: 5376037342 - 5249997208 = 126040134
2025-01-08T21:28:32.747957 - #168 [SanaTextEncode]: 0.47s - vram 126040134b
2025-01-08T21:28:32.844984 - end_vram - start_vram: 5377419742 - 5284934040 = 92485702
2025-01-08T21:28:32.846984 - #167 [SanaTextEncode]: 0.10s - vram 92485702b
2025-01-08T21:28:32.849984 - [Impact Subpack] Your torch version is outdated, and security features cannot be applied properly.
2025-01-08T21:28:35.246075 - model_type FLOW
2025-01-08T21:28:54.465010 - !!! Exception during processing !!! Error(s) in loading state_dict for SanaMS:
	size mismatch for pos_embed: copying a param with shape torch.Size([1, 4096, 2240]) from checkpoint, the shape in current model is torch.Size([1, 1024, 2240]).
2025-01-08T21:28:54.467910 - Traceback (most recent call last):
  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI_ExtraModels\Sana\nodes.py", line 33, in load_checkpoint
    model = load_sana(
  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\custom_nodes\ComfyUI_ExtraModels\Sana\loader.py", line 88, in load_sana
    m, u = model.diffusion_model.load_state_dict(state_dict, strict=False)
  File "D:\Programovanie\Python\stable-diffusion-web\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 2189, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SanaMS:
	size mismatch for pos_embed: copying a param with shape torch.Size([1, 4096, 2240]) from checkpoint, the shape in current model is torch.Size([1, 1024, 2240]).

2025-01-08T21:28:54.470795 - end_vram - start_vram: 5286316440 - 5286316440 = 0
2025-01-08T21:28:54.472750 - #164 [SanaCheckpointLoader]: 21.62s - vram 0b
2025-01-08T21:28:54.473750 - Prompt executed in 29.21 seconds

Attached Workflow

Please make sure that the workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":176,"last_link_id":317,"nodes":[{"id":168,"type":"SanaTextEncode","pos":[200,460],"size":[340,90],"flags":{},"order":5,"mode":0,"inputs":[{"name":"GEMMA","type":"GEMMA","link":307}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[304],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"SanaTextEncode"},"widgets_values":["photo, depth of field"]},{"id":65,"type":"VAEDecode","pos":[1000,80],"size":[200,50],"flags":{},"order":8,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":292},{"name":"vae","type":"VAE","link":308}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[313],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":155,"type":"KSampler","pos":[600,30],"size":[300,480],"flags":{},"order":7,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":300},{"name":"positive","type":"CONDITIONING","link":305},{"name":"negative","type":"CONDITIONING","link":304},{"name":"latent_image","type":"LATENT","link":317,"slot_index":3}],"outputs":[{"name":"LATENT","type":"LATENT","links":[292],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[20,"fixed",20,4,"euler","normal",1]},{"id":66,"type":"SaveImage","pos":[1000,170],"size":[610,490],"flags":{},"order":9,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":313}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["ComfyUI_Sana"]},{"id":167,"type":"SanaTextEncode","pos":[200,290],"size":[340,130],"flags":{},"order":4,"mode":0,"inputs":[{"name":"GEMMA","type":"GEMMA","link":306,"slot_index":0}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[305],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"SanaTextEncode"},"widgets_values":["pixelart drawing of a tank with a blue camo pattern"]},{"id":176,"type":"EmptySanaLatentImage","pos":[237.799072265625,136.52313232421875],"size":[210,80],"flags":{},"order":6,"mode":0,"inputs":[{"name":"width","type":"INT","link":315,"widget":{"name":"width"}},{"name":"height","type":"INT","link":316,"widget":{"name":"height"}}],"outputs":[{"name":"LATENT","type":"LATENT","links":[317],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"EmptySanaLatentImage"},"widgets_values":[512,512,1]},{"id":169,"type":"GemmaLoader","pos":[-187.1352081298828,288.4422302246094],"size":[335.4521179199219,110.77887725830078],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"GEMMA","type":"GEMMA","links":[306,307],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"GemmaLoader"},"widgets_values":["Efficient-Large-Model/gemma-2-2b-it","cuda","BF16"]},{"id":165,"type":"SanaResolutionSelect","pos":[-177.9899444580078,133.77259826660156],"size":[280,102],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"width","type":"INT","links":[315],"slot_index":0,"shape":3},{"name":"height","type":"INT","links":[316],"slot_index":1,"shape":3}],"properties":{"Node name for S&R":"SanaResolutionSelect"},"widgets_values":["1024px","1.00"]},{"id":164,"type":"SanaCheckpointLoader","pos":[-185.7548065185547,-53.31159210205078],"size":[320,82],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"model","type":"MODEL","links":[300],"slot_index":0,"shape":3}],"properties":{"Node name for 
S&R":"SanaCheckpointLoader"},"widgets_values":["SANA\\Sana_1600M_2Kpx_BF16.pth","SanaMS_1600M_P1_D20"]},{"id":171,"type":"ExtraVAELoader","pos":[544.422607421875,578.8182373046875],"size":[422.05926513671875,106.10562133789062],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[308],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"ExtraVAELoader"},"widgets_values":["mit-han-lab\\dc-ae-f32c32-sana-1.0-diffusers.safetensors","dcae-f32c32-sana-1.0","BF16"]}],"links":[[292,155,0,65,0,"LATENT"],[300,164,0,155,0,"MODEL"],[304,168,0,155,2,"CONDITIONING"],[305,167,0,155,1,"CONDITIONING"],[306,169,0,167,0,"GEMMA"],[307,169,0,168,0,"GEMMA"],[308,171,0,65,1,"VAE"],[313,65,0,66,0,"IMAGE"],[315,165,0,176,0,"INT"],[316,165,1,176,1,"INT"],[317,176,0,155,3,"LATENT"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.8769226950000905,"offset":[1077.7490957436844,230.60726550576078]},"ue_links":[]},"version":0.4}

Additional Context

Used model checkpoint: https://huggingface.co/Efficient-Large-Model/Sana_1600M_2Kpx_BF16
Used text_encoders: https://huggingface.co/Efficient-Large-Model/gemma-2-2b-it
Used vae: https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers
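
The attached workflow pairs Sana_1600M_2Kpx_BF16.pth with the SanaMS_1600M_P1_D20 (1024px) model option, which is consistent with the 4096 vs 1024 mismatch above. The checkpoint's native grid can also be confirmed offline, independent of the loader; a small sketch, assuming the .pth keeps its weights at the top level or under a "state_dict" key and names the embedding pos_embed as in the error message (the path is illustrative):

    import torch

    ckpt = torch.load("models/checkpoints/SANA/Sana_1600M_2Kpx_BF16.pth", map_location="cpu")
    sd = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    # 4096 tokens = 64x64 latent grid -> a 2048px model with the f32 VAE and patch size 1
    print(sd["pos_embed"].shape)  # expected: torch.Size([1, 4096, 2240])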

@night-rocker

Same error for me when trying to use the Sana 2K model in ComfyUI. The Sana Checkpoint Loader node does not show "SanaMS_1600M_P1_D20_2K" in the model list, only the same 1600M_P1_D20 and 600M_P1_D28 options. Everything is up to date. Any help appreciated.


gamexy commented Jan 19, 2025

Same error for me when trying to use the Sana 2K model in ComfyUI. The Sana Checkpoint Loader node does not show "SanaMS_1600M_P1_D20_2K" in the model list, only the same 1600M_P1_D20 and 600M_P1_D28 options. Everything is up to date. Any help appreciated.

same problem
