Calls the /health endpoint of a running llama.cpp server.
Returns the status string ("ok", "loading model", "no model loaded",
or "error") along with the full parsed response body as a named list.
Usage
llamacpp_health(
  .server = Sys.getenv("LLAMACPP_SERVER", "http://localhost:8080"),
  .timeout = 10
)