Send an LLMMessage to the Ollama API
Usage
ollama(
  .llm,
  .model = "llama3",
  .stream = FALSE,
  .seed = NULL,
  .json = FALSE,
  .temperature = NULL,
  .num_ctx = 2048,
  .ollama_server = "http://localhost:11434",
  .timeout = 120,
  .dry_run = FALSE
)
Arguments
- .llm
An existing LLMMessage object or an initial text prompt.
- .model
The model identifier (default: "llama3").
- .stream
Logical; should the answer be streamed to the console as it is generated (default: FALSE).
- .seed
An integer seed for reproducible random number generation (optional).
- .json
Should output be structured as JSON (default: FALSE).
- .temperature
Controls the randomness of response generation; lower values produce more deterministic output (optional).
- .num_ctx
The size of the context window in tokens (default: 2048).
- .ollama_server
The base URL of the Ollama server (default: "http://localhost:11434").
- .timeout
Request timeout in seconds (default: 120).
- .dry_run
If TRUE, perform a dry run and return the request object.
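Examples
A minimal sketch of typical calls, assuming a local Ollama server is running at the default address and the "llama3" model has already been pulled:

# Basic call: .llm also accepts an initial text prompt directly
ollama("Name three common uses of the R language.")

# Reproducible, low-randomness generation with a larger context window
ollama(
  "Summarise the plot of Hamlet in two sentences.",
  .seed = 42,
  .temperature = 0,
  .num_ctx = 4096
)

# Inspect the request object without contacting the server
ollama("Hello!", .dry_run = TRUE)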