
Call the Anthropic API to interact with Claude models

Usage

claude(
  .llm,
  .model = "claude-3-5-sonnet-20240620",
  .max_tokens = 1024,
  .temperature = NULL,
  .top_k = NULL,
  .top_p = NULL,
  .metadata = NULL,
  .stop_sequences = NULL,
  .tools = NULL,
  .api_url = "https://api.anthropic.com/",
  .verbose = FALSE,
  .wait = TRUE,
  .min_tokens_reset = 0L,
  .timeout = 60,
  .json = FALSE,
  .stream = FALSE,
  .dry_run = FALSE
)

Arguments

.llm

An existing LLMMessage object or an initial text prompt.

.model

The model identifier (default: "claude-3-5-sonnet-20240620").

.max_tokens

The maximum number of tokens to generate (default: 1024).

.temperature

Controls randomness in response generation; higher values produce more varied output (optional).

.top_k

Top k sampling parameter (optional).

.top_p

Nucleus sampling parameter (optional).

.metadata

Additional metadata for the request (optional).

.stop_sequences

Sequences that stop generation (optional).

.tools

Additional tools used by the model (optional).

.api_url

Base URL for the API (default: "https://api.anthropic.com/").

.verbose

Should additional information be shown after the API call (default: FALSE)?

.wait

Should the function wait when rate limits are reached (default: TRUE)?

.min_tokens_reset

Minimum number of tokens that must remain in the rate limit before the function waits for the token allowance to reset (default: 0).

.timeout

Request timeout in seconds (default: 60).

.json

Should the output be returned in JSON format (default: FALSE)?

.stream

Stream back the response piece by piece (default: FALSE).

.dry_run

If TRUE, perform a dry run and return the request object without sending it (default: FALSE).

Value

Returns an updated LLMMessage object.
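
Examples

A minimal usage sketch. Per the .llm argument above, a plain text prompt is accepted directly; the sketch assumes an Anthropic API key is configured in the environment:

# Send a simple text prompt and capture the updated LLMMessage object
response <- claude("What is the capital of France?",
  .temperature = 0.7,
  .max_tokens = 256
)

# Inspect the request object without calling the API
request <- claude("What is the capital of France?", .dry_run = TRUE)

Using .dry_run = TRUE is a convenient way to verify headers and the request body before spending tokens on a live call.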