Send LLMMessage to Mistral API

Usage

mistral(
  .llm,
  .model = "mistral-large-latest",
  .stream = FALSE,
  .seed = NULL,
  .json = FALSE,
  .temperature = NULL,
  .timeout = 120,
  .wait = TRUE,
  .min_tokens_reset = 0L,
  .max_tokens = 1024,
  .min_tokens = NULL,
  .dry_run = FALSE,
  .verbose = FALSE
)

Arguments

.llm

An existing LLMMessage object or an initial text prompt.

.model

The model identifier (default: "mistral-large-latest").

.stream

Should the response be streamed to the console as it arrives (default: FALSE).

.seed

Seed for random number generation, for reproducible results (optional).

.json

Should output be structured as JSON (default: FALSE).

.temperature

Controls randomness in response generation; higher values produce more varied output (optional).

.timeout

Request timeout in seconds (default: 120).

.wait

Should the function wait when rate limits are reached (default: TRUE).

.min_tokens_reset

Threshold of remaining tokens below which the function waits for the token limit to reset (default: 0L).

.max_tokens

Maximum number of tokens for response (default: 1024).

.min_tokens

Minimum number of tokens for response (optional).

.dry_run

If TRUE, perform a dry run and return the request object without sending it to the API (default: FALSE).

.verbose

Should additional information be shown after the API call (default: FALSE).

Value

Returns an updated LLMMessage object containing the model's reply.
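
Examples

A minimal sketch of typical calls. It assumes llm_message() (presumably the package's constructor for LLMMessage objects, not documented on this page) and a Mistral API key configured in the environment; both are assumptions rather than details taken from this page.

# Build a message; llm_message() is assumed to construct the LLMMessage
# that mistral() expects as its .llm argument
msg <- llm_message("What is the capital of France?")

# Basic call with defaults
reply <- mistral(msg)

# Stream the answer to the console as it is generated
reply_stream <- mistral(msg, .stream = TRUE)

# Reproducible, JSON-structured output with a larger response budget
reply_json <- mistral(
  msg,
  .seed        = 42,
  .json        = TRUE,
  .temperature = 0,
  .max_tokens  = 2048
)

# Inspect the request object without contacting the API
req <- mistral(msg, .dry_run = TRUE)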