I used Ollama to run the model Wan2.2-T2V-A14B-LowNoise-Q5_0.gguf, and the following errors occur. Please help, thanks.
time=2025-09-02T20:56:32.992+08:00 level=INFO source=routes.go:1331 msg="server config" env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:4096 OLLAMA_DEBUG:INFO OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\Users\suyon\.ollama\models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NEW_ESTIMATES:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES:]"
time=2025-09-02T20:56:33.036+08:00 level=INFO source=images.go:477 msg="total blobs: 2"
time=2025-09-02T20:56:33.036+08:00 level=INFO source=images.go:484 msg="total unused blobs removed: 0"
time=2025-09-02T20:56:33.038+08:00 level=INFO source=routes.go:1384 msg="Listening on 127.0.0.1:11434 (version 0.11.8)"
time=2025-09-02T20:56:33.039+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-09-02T20:56:33.039+08:00 level=INFO source=gpu_windows.go:167 msg=packages count=1
time=2025-09-02T20:56:33.039+08:00 level=INFO source=gpu_windows.go:214 msg="" package=0 cores=6 efficiency=0 threads=12
time=2025-09-02T20:56:33.092+08:00 level=INFO source=gpu.go:379 msg="no compatible GPUs were discovered"
time=2025-09-02T20:56:33.093+08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="15.3 GiB" available="8.4 GiB"
time=2025-09-02T20:56:33.093+08:00 level=INFO source=routes.go:1425 msg="entering low vram mode" "total vram"="15.3 GiB" threshold="20.0 GiB"
[GIN] 2025/09/02 - 20:57:31 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/09/02 - 20:57:44 | 200 | 3.1661ms | 127.0.0.1 | POST "/api/blobs/sha256:9dbceecceb04c7dfce9f75330e7e0009e9baa86330cac2432a26a83bdd83b72d"
[GIN] 2025/09/02 - 20:57:44 | 200 | 12.6889ms | 127.0.0.1 | POST "/api/create"
[GIN] 2025/09/02 - 20:58:15 | 200 | 1.5346ms | 127.0.0.1 | HEAD "/"
[GIN] 2025/09/02 - 20:58:15 | 200 | 9.0115ms | 127.0.0.1 | POST "/api/show"
gguf_init_from_file_impl: tensor 'patch_embedding.weight' has invalid number of dimensions: 5 > 4
gguf_init_from_file_impl: failed to read tensor info
llama_model_load: error loading model: llama_model_loader: failed to load model from C:\Users\suyon.ollama\models\blobs\sha256-9dbceecceb04c7dfce9f75330e7e0009e9baa86330cac2432a26a83bdd83b72d
llama_model_load_from_file_impl: failed to load model
time=2025-09-02T20:58:15.995+08:00 level=INFO source=sched.go:420 msg="NewLlamaServer failed" model=C:\Users\suyon.ollama\models\blobs\sha256-9dbceecceb04c7dfce9f75330e7e0009e9baa86330cac2432a26a83bdd83b72d error="unable to load model: C:\Users\suyon\.ollama\models\blobs\sha256-9dbceecceb04c7dfce9f75330e7e0009e9baa86330cac2432a26a83bdd83b72d"
[GIN] 2025/09/02 - 20:58:15 | 500 | 20.5819ms | 127.0.0.1 | POST "/api/generate"
[GIN] 2025/09/02 - 21:00:22 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2025/09/02 - 21:00:34 | 200 | 533.2µs | 127.0.0.1 | POST "/api/blobs/sha256:9dbceecceb04c7dfce9f75330e7e0009e9baa86330cac2432a26a83bdd83b72d"
[GIN] 2025/09/02 - 21:00:34 | 200 | 11.9027ms | 127.0.0.1 | POST "/api/create"
[GIN] 2025/09/02 - 21:00:36 | 200 | 73.2µs | 127.0.0.1 | HEAD "/"
[GIN] 2025/09/02 - 21:00:36 | 200 | 6.7898ms | 127.0.0.1 | POST "/api/show"
gguf_init_from_file_impl: tensor 'patch_embedding.weight' has invalid number of dimensions: 5 > 4
gguf_init_from_file_impl: failed to read tensor info
llama_model_load: error loading model: llama_model_loader: failed to load model from C:\Users\suyon.ollama\models\blobs\sha256-9dbceecceb04c7dfce9f75330e7e0009e9baa86330cac2432a26a83bdd83b72d
llama_model_load_from_file_impl: failed to load model
time=2025-09-02T21:00:36.461+08:00 level=INFO source=sched.go:420 msg="NewLlamaServer failed" model=C:\Users\suyon.ollama\models\blobs\sha256-9dbceecceb04c7dfce9f75330e7e0009e9baa86330cac2432a26a83bdd83b72d error="unable to load model: C:\Users\suyon\.ollama\models\blobs\sha256-9dbceecceb04c7dfce9f75330e7e0009e9baa86330cac2432a26a83bdd83b72d"
[GIN] 2025/09/02 - 21:00:36 | 500 | 10.0783ms | 127.0.0.1 | POST "/api/generate"
[... the same gguf_init_from_file_impl error and 500 on POST "/api/generate" repeat on every retry through 21:03:42 ...]
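The failure happens in the GGUF loader itself, before any inference: `patch_embedding.weight` is a 5-dimensional tensor (Wan2.2-T2V is a text-to-video diffusion model, so its patch embedding plausibly convolves over time as well as space), while ggml caps tensor rank at 4 (`GGML_MAX_DIMS`), so llama.cpp — and therefore Ollama — rejects the file. A minimal sketch of that check against a hand-built GGUF v3 header (no metadata KVs; the shape values below are made up for illustration, not the model's real shapes):

```python
import struct

GGML_MAX_DIMS = 4  # ggml's hard limit on tensor rank

def build_gguf(tensors):
    """Build a minimal GGUF v3 blob: header plus tensor-info records, no metadata."""
    out = struct.pack("<4sIQQ", b"GGUF", 3, len(tensors), 0)  # magic, version, n_tensors, n_kv
    for name, dims in tensors:
        nb = name.encode()
        out += struct.pack("<Q", len(nb)) + nb                 # name length + name
        out += struct.pack("<I", len(dims))                    # number of dimensions
        out += b"".join(struct.pack("<Q", d) for d in dims)    # each dimension
        out += struct.pack("<IQ", 0, 0)                        # ggml type (F32), data offset
    return out

def check_tensor_dims(blob):
    """Walk the tensor infos and flag any tensor whose rank exceeds GGML_MAX_DIMS."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", blob, 0)
    assert magic == b"GGUF" and n_kv == 0
    pos = 24
    errors = []
    for _ in range(n_tensors):
        (name_len,) = struct.unpack_from("<Q", blob, pos); pos += 8
        name = blob[pos:pos + name_len].decode(); pos += name_len
        (n_dims,) = struct.unpack_from("<I", blob, pos); pos += 4
        pos += 8 * n_dims + 12  # skip the dims array, type, and offset
        if n_dims > GGML_MAX_DIMS:
            errors.append(f"tensor '{name}' has invalid number of dimensions: "
                          f"{n_dims} > {GGML_MAX_DIMS}")
    return errors

# A 5-D patch-embedding tensor, as a video model's Conv3d weight would be
blob = build_gguf([("patch_embedding.weight", (5120, 16, 1, 2, 2))])
print(check_tensor_dims(blob))
# ["tensor 'patch_embedding.weight' has invalid number of dimensions: 5 > 4"]
```

This reproduces the exact message in the log above, which suggests no quantization or re-download will help: the file is structurally outside what llama.cpp's GGUF loader accepts, independent of the Q5_0 quantization or the CPU-only setup.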