
This function creates and submits a batch of messages to the Azure OpenAI Batch API for asynchronous processing.

Usage

send_azure_openai_batch(
  .llms,
  .deployment = "gpt-4o-mini",
  .endpoint_url = Sys.getenv("AZURE_ENDPOINT_URL"),
  .api_version = "2024-10-01-preview",
  .max_completion_tokens = NULL,
  .frequency_penalty = NULL,
  .logit_bias = NULL,
  .logprobs = FALSE,
  .top_logprobs = NULL,
  .presence_penalty = NULL,
  .seed = NULL,
  .stop = NULL,
  .temperature = NULL,
  .top_p = NULL,
  .dry_run = FALSE,
  .overwrite = FALSE,
  .max_tries = 3,
  .timeout = 60,
  .verbose = FALSE,
  .json_schema = NULL,
  .id_prefix = "tidyllm_azure_openai_req_"
)
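
The endpoint defaults to the AZURE_ENDPOINT_URL environment variable, so credentials are typically configured once per session. A minimal setup sketch follows; the resource URL is a placeholder, and the name of the API-key variable is an assumption here, not confirmed by this page.

# Hypothetical session setup; check the package's authentication
# docs for the exact key variable it reads.
Sys.setenv(
  AZURE_ENDPOINT_URL   = "https://my-resource.openai.azure.com",
  AZURE_OPENAI_API_KEY = "<your-key>"  # variable name assumed
)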

Arguments

.llms

A named list of LLMMessage objects containing the conversation histories to process as a batch.

.deployment

The identifier of the deployed model, i.e. the Azure deployment name (default: "gpt-4o-mini").

.endpoint_url

Base URL for the API (default: Sys.getenv("AZURE_ENDPOINT_URL")).

.api_version

The API version to use (default: "2024-10-01-preview").

.max_completion_tokens

An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

.frequency_penalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far.

.logit_bias

A named list modifying the likelihood of specified tokens appearing in the completion.

.logprobs

Whether to return log probabilities of the output tokens (default: FALSE).

.top_logprobs

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position; requires .logprobs = TRUE.

.presence_penalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far.

.seed

If specified, the system will make a best effort to sample deterministically.

.stop

Up to 4 sequences where the API will stop generating further tokens.

.temperature

What sampling temperature to use, between 0 and 2. Higher values make the output more random.

.top_p

An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top .top_p probability mass.

.dry_run

If TRUE, performs a dry run and returns the request object without sending it (default: FALSE).

.overwrite

Logical; if TRUE, allows overwriting an existing batch ID (default: FALSE).

.max_tries

Maximum number of attempts to perform the request (default: 3).

.timeout

Request timeout in seconds (default: 60).

.verbose

Logical; if TRUE, additional info about the requests is printed (default: FALSE).

.json_schema

A JSON schema, supplied as an R list, that enforces the structure of the model output (default: NULL); see the sketch after this argument list.

.id_prefix

Character string to specify a prefix for generating custom IDs when names in .llms are missing (default: "tidyllm_azure_openai_req_").
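
As referenced under .json_schema above, a minimal sketch of enforcing structured output. The list shape mirrors the OpenAI json_schema response format, and the schema name and fields are illustrative; the exact shape this function expects is an assumption, so verify against the package's structured-output documentation.

# Illustrative schema as a plain R list (shape assumed):
schema <- list(
  name = "city_info",
  schema = list(
    type = "object",
    properties = list(
      city    = list(type = "string"),
      country = list(type = "string")
    ),
    required = list("city", "country")
  )
)

batch <- send_azure_openai_batch(
  .llms = conversations,  # a named list of LLMMessage objects
  .json_schema = schema
)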

Value

An updated and named list of .llms with identifiers that align with batch responses, including a batch_id attribute.
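
Examples

A sketch of the end-to-end workflow (not run). It assumes credentials are set as shown under Usage, that llm_message() constructs a single conversation, and that results are later polled and collected with companion helpers whose names are inferred by analogy with the package's OpenAI batch functions; verify the exact names in the package index.

# Build a named list of conversations; the names become the
# custom IDs that align with batch responses.
conversations <- list(
  capital = llm_message("What is the capital of France?"),
  haiku   = llm_message("Write a haiku about autumn.")
)

# Inspect the request object without sending anything.
req <- send_azure_openai_batch(conversations, .dry_run = TRUE)

# Submit the batch; the returned list carries a batch_id attribute.
batch <- send_azure_openai_batch(conversations, .temperature = 0)
attr(batch, "batch_id")

# Later, poll and collect results (helper names assumed by analogy
# with the OpenAI batch workflow):
# check_azure_openai_batch(batch)
# fetch_azure_openai_batch(batch)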