Send a Batch of Requests to the Mistral API

Usage

send_mistral_batch(
  .llms,
  .model = "mistral-small-latest",
  .endpoint = "/v1/chat/completions",
  .metadata = NULL,
  .temperature = 0.7,
  .top_p = 1,
  .max_tokens = 1024,
  .min_tokens = NULL,
  .seed = NULL,
  .stop = NULL,
  .dry_run = FALSE,
  .overwrite = FALSE,
  .max_tries = 3,
  .timeout = 60,
  .id_prefix = "tidyllm_mistral_req_"
)
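
A minimal call might look like the following sketch. It assumes llm_message() from tidyllm for building LLMMessage objects, and uses .dry_run = TRUE so the prepared request can be inspected without contacting the API:

library(tidyllm)

# Two independent conversations to be processed in one batch
msgs <- list(
  llm_message("Summarise the plot of Hamlet in one sentence."),
  llm_message("Translate 'good morning' into French.")
)

# Inspect the prepared request without sending anything
req <- send_mistral_batch(msgs, .dry_run = TRUE)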

Arguments

.llms

A list of LLMMessage objects containing conversation histories (see the example after this list).

.model

The Mistral model version (default: "mistral-small-latest").

.endpoint

The API endpoint (default: "/v1/chat/completions").

.metadata

Optional metadata for the batch.

.temperature

Sampling temperature to use, between 0.0 and 1.5. Higher values make the output more random (default: 0.7).

.top_p

Nucleus sampling parameter, between 0.0 and 1.0 (default: 1).

.max_tokens

The maximum number of tokens to generate in the completion (default: 1024).

.min_tokens

The minimum number of tokens to generate (optional).

.seed

Random seed for deterministic outputs (optional).

.stop

Stop generation at specific tokens or strings (optional).

.dry_run

Logical; if TRUE, returns the prepared API request without sending it (default: FALSE).

.overwrite

Logical; if TRUE, allows overwriting existing custom IDs (default: FALSE).

.max_tries

Maximum retry attempts for requests (default: 3).

.timeout

Request timeout in seconds (default: 60).

.id_prefix

Prefix for generating custom IDs (default: "tidyllm_mistral_req_").
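
As a sketch of how the sampling arguments combine in a real call (again assuming llm_message() from tidyllm, and a Mistral API key set in the environment, e.g. via MISTRAL_API_KEY):

batch <- send_mistral_batch(
  msgs,
  .model       = "mistral-small-latest",
  .temperature = 0.2,    # lower temperature for more deterministic output
  .max_tokens  = 256,    # cap each completion
  .seed        = 42,     # reproducible sampling
  .stop        = "\n\n"  # stop at the first blank line
)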

Value

The input list of LLMMessage objects with a batch_id attribute attached; this attribute identifies the batch when checking its status or fetching its results later.
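
Examples

The attached batch_id is what downstream calls use to track the batch. A sketch, assuming the tidyllm companion functions check_mistral_batch() and fetch_mistral_batch():

batch <- send_mistral_batch(msgs)

# The batch identifier travels with the returned list
attr(batch, "batch_id")

# Later: poll the batch status, then retrieve results once complete
check_mistral_batch(batch)
results <- fetch_mistral_batch(batch)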