I'm having the same issue as shown in the screenshots above. I've tried Qwen3.6 35B A3B, Gemma 4 26B, GPT OSS 20B, and GLM 4.7 Flash. Following another issue, I also tried disabling thinking, to no avail, and experimented with different prompts, mainly "do not add anything between the tool call and @@@VIZ-START". Is this a model output problem or a tool problem? I'm running the latest Open Web UI; the backend is the latest commit of llama.cpp; the OS is Arch Linux (fully updated); the plugin is the latest release from the repository. All models are from Hugging Face, using the built-in chat template. Each screenshot is from a separate, fresh chat. Any help is appreciated.