Eval bug: It is not possible to catch LLM output anymore #17933

@andrew-aladev

Description

Name and Version

Any llama.cpp version starting from b7348.

Operating systems

Linux

GGML backends

CPU

Hardware

any

Models

any

Problem description & steps to reproduce

I can't capture LLM output with llama-cli anymore, because there is no longer any way to disable interactive mode in llama-cli. I could use llama-completion instead, but it is a legacy binary, and I would rather not depend on it because it may be removed in a future release.

So I want to ask: how can I capture the LLM output now that #17824 has been merged? Thank you.
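For context, the kind of non-interactive capture being asked about can be sketched as follows. This is a minimal sketch, not a verified invocation: `-m` and `-p` are the standard llama.cpp model and prompt flags, while `model.gguf`, the prompt text, and `output.txt` are placeholders.

```shell
# Sketch: run a single non-interactive completion with the legacy
# llama-completion binary and redirect the generated text to a file.
# model.gguf, the prompt, and output.txt are placeholder values.
./llama-completion -m model.gguf -p "Why is the sky blue?" > output.txt
```

The question in this issue is how to achieve the same redirection with llama-cli now that it always runs interactively.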

First Bad Commit

#17824 @ngxson

Relevant log output

[llama.cpp ASCII banner]

build      : b1062-34ce48d9
model      : unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF:Q8_K_XL
modalities : text
using custom system prompt

available commands:
  /exit or Ctrl+C     stop or exit
  /regen              regenerate the last response
  /clear              clear the chat history
  /read               add a text file
