Hi everyone 👋
I noticed a small change that could improve CLI usability.
When using llama-mtmd-cli.exe, initialization messages go to StandardError and model replies go to StandardOutput, which is exactly the separation an integration wants.
But when I run the /image [ImagePath] command, the image-processing logs (like "encoding image slice…" and "decoding image batch…") are also printed to StandardOutput, mixed in with the assistant's reply.
Example in terminal:
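(Reconstructed to illustrate the issue; the exact reply text will differ.)

```
> /image [ImagePath]
encoding image slice...
decoding image batch...
The image shows a cat sitting on a windowsill.
```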
Would it be possible to redirect those internal image-processing logs to StandardError (or another stream)?
That would keep StandardOutput clean and make it easier to parse or display only the model’s actual response in chat-based UIs.
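For context, here is a minimal sketch of the kind of integration this would simplify. It's Python, and the flags and file names (`-m model.gguf`, `--mmproj mmproj.gguf`, `--image`, `-p`) are assumptions about a typical one-shot invocation, not a verbatim recipe:

```python
import subprocess

# Minimal sketch of a chat UI wrapping llama-mtmd-cli in one-shot mode.
# The flags and file names below are assumptions; adjust to your setup.
result = subprocess.run(
    [
        "llama-mtmd-cli.exe",
        "-m", "model.gguf",
        "--mmproj", "mmproj.gguf",
        "--image", "cat.jpg",
        "-p", "Describe this image.",
    ],
    capture_output=True,
    text=True,
)

# With a clean stream split, result.stdout would be exactly the assistant's
# reply, and result.stderr would carry init and image-processing logs.
# Today, lines like "encoding image slice..." still land in result.stdout
# and have to be filtered out heuristically before display.
reply = result.stdout
print(reply)
```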
Small tweak — big quality-of-life improvement for integrations.
Thanks for all your amazing work on llama.cpp! 🙏