Name and Version
Any llama.cpp version starting from b7348.
Operating systems
Linux
GGML backends
CPU
Hardware
any
Models
any
Problem description & steps to reproduce
I can no longer capture the LLM output with llama-cli, because there is no longer any way to disable interactive mode. I could switch to llama-completion, but it is a legacy binary and I would rather not depend on it, since it may be removed in a future release.
So I want to ask: how can I capture the LLM output now that #17824 has been merged? Thank you.
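For context, this is the kind of non-interactive invocation I relied on before; a minimal sketch, assuming llama-completion accepts the same basic `-m` / `-p` / `-n` flags as the old llama-cli (the model path and prompt below are placeholders):

```sh
# Hypothetical sketch: run a single completion non-interactively and
# capture stdout for scripting. Assumes llama-completion takes the
# same basic flags as the old llama-cli; path and prompt are placeholders.
llama-completion \
  -m ./models/model.gguf \
  -p "Write a haiku about llamas." \
  -n 128 > output.txt
```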
First Bad Commit
Relevant log output
(llama.cpp ASCII-art banner)
build : b1062-34ce48d9
model : unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF:Q8_K_XL
modalities : text
using custom system prompt
available commands:
/exit or Ctrl+C stop or exit
/regen regenerate the last response
/clear clear the chat history
/read add a text file