Send LLMMessage to Mistral API

Usage

mistral_chat(
  .llm,
  .model = "mistral-large-latest",
  .stream = FALSE,
  .seed = NULL,
  .json = FALSE,
  .temperature = 0.7,
  .top_p = 1,
  .stop = NULL,
  .safe_prompt = FALSE,
  .timeout = 120,
  .max_tries = 3,
  .max_tokens = 1024,
  .min_tokens = NULL,
  .dry_run = FALSE,
  .verbose = FALSE
)
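
For a minimal call, construct a message and pass it in. The sketch below assumes llm_message() is the constructor for LLMMessage objects and that a Mistral API key is available in the environment (e.g. MISTRAL_API_KEY); both are assumptions, not part of this function's signature:

# A minimal request (assumes llm_message() builds an LLMMessage
# and that MISTRAL_API_KEY is set in the environment)
msg <- llm_message("What is the capital of France?")
answer <- mistral_chat(msg)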

Arguments

.llm

An LLMMessage object.

.model

The model identifier to use (default: "mistral-large-latest").

.stream

Whether to stream back partial progress to the console (default: FALSE).

.seed

The seed to use for random sampling. If set, repeated calls will produce deterministic results (optional); see the example after the argument list.

.json

Whether the output should be in JSON mode (default: FALSE).

.temperature

Sampling temperature to use, between 0.0 and 1.5. Higher values make the output more random, while lower values make it more focused and deterministic (default: 0.7).

.top_p

Nucleus sampling parameter, between 0.0 and 1.0. The model considers tokens with top_p probability mass (default: 1).

.stop

Stop generation when this token is detected, or when any of the tokens is detected if a list is provided (optional).

.safe_prompt

Whether to inject a safety prompt before all conversations (default: FALSE).

.timeout

Number of seconds before the connection times out (default: 120).

.max_tries

Maximum number of retries to perform the request (default: 3).

.max_tokens

The maximum number of tokens to generate in the completion. Must be >= 0 (default: 1024).

.min_tokens

The minimum number of tokens to generate in the completion. Must be >= 0 (optional).

.dry_run

If TRUE, perform a dry run and return the request object (default: FALSE).

.verbose

Should additional information be shown after the API call? (default: FALSE)
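
The following sketch combines several of these arguments: a fixed .seed for reproducible output, .json for structured responses, and .dry_run to inspect the request before sending anything. The prompt and the llm_message() constructor are illustrative assumptions:

msg <- llm_message("List three French cities as a JSON array.")

# Deterministic, JSON-formatted completion with conservative sampling
answer <- mistral_chat(
  msg,
  .seed        = 42,    # identical calls now return identical results
  .json        = TRUE,  # request JSON-mode output
  .temperature = 0.2,   # lower temperature: more focused sampling
  .max_tokens  = 256
)

# Dry run: build and return the request object without calling the API
req <- mistral_chat(msg, .dry_run = TRUE)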

Value

Returns an updated LLMMessage object.
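
Because the result is again an LLMMessage, it can be extended with a follow-up prompt and resent, giving a multi-turn conversation. This sketch assumes llm_message() accepts an existing message history as its first argument:

conversation <- llm_message("Name a famous French painter.") |>
  mistral_chat()

# Append a follow-up turn to the returned history and send it again
# (assumes llm_message() can extend an existing LLMMessage)
conversation <- conversation |>
  llm_message("What is that painter best known for?") |>
  mistral_chat()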