The llamacpp() provider connects tidyllm to a locally running llama.cpp server. It exposes the same verb/provider pattern as every other tidyllm provider while also offering llama.cpp-specific features: BNF grammar constraints (.grammar), token logprobs (.logprobs), and model management helpers.

The server must be started separately before calling llamacpp(). See llamacpp_health() to verify the server is running, and llamacpp_download_model() / list_hf_gguf_files() to obtain models.
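As an illustration, the server can be launched with llama.cpp's bundled `llama-server` binary before the provider is used. The model path and port below are placeholders, not defaults prescribed by tidyllm:

```shell
# Start llama.cpp's HTTP server in a separate terminal.
# Replace the path with a GGUF model you have downloaded
# (see llamacpp_download_model() / list_hf_gguf_files()).
llama-server -m ./models/mistral-7b-instruct.Q4_K_M.gguf --port 8080
```

Once the server is up, `llamacpp_health()` can be used to confirm it is reachable before issuing requests.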

Usage

llamacpp(..., .called_from = NULL)

Arguments

...

Parameters passed to the appropriate llama.cpp-specific function.

.called_from

An internal argument specifying which verb invoked this function. Managed automatically by tidyllm verbs; do not set manually.

Value

The result of the requested action (e.g., an updated LLMMessage for chat(), a tibble for embed() or list_models()).
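A sketch of typical usage, assuming a llama.cpp server is already running locally. The `.grammar` and `.logprobs` arguments are the llama.cpp-specific features described above; the grammar string itself is illustrative:

```r
library(tidyllm)

# Chat against the locally running llama.cpp server.
# .grammar constrains the reply with a BNF grammar;
# .logprobs requests per-token log probabilities.
answer <- llm_message("Reply with yes or no: is R vectorized?") |>
  chat(llamacpp(
    .grammar  = 'root ::= "yes" | "no"',
    .logprobs = TRUE
  ))

answer  # an updated LLMMessage containing the constrained reply
```

Because `chat()` dispatches on the provider object, the same pipeline works with any other tidyllm provider by swapping `llamacpp()` for, say, a different backend.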