Interact with Claude AI models via the Anthropic API
Usage
claude_chat(
.llm,
.model = "claude-3-5-sonnet-20241022",
.max_tokens = 1024,
.temperature = NULL,
.top_k = NULL,
.top_p = NULL,
.metadata = NULL,
.stop_sequences = NULL,
.tools = NULL,
.api_url = "https://api.anthropic.com/",
.verbose = FALSE,
.max_tries = 3,
.timeout = 60,
.stream = FALSE,
.dry_run = FALSE
)
Arguments
- .llm
An LLMMessage object containing the conversation history and system prompt.
- .model
Character string specifying the Claude model version (default: "claude-3-5-sonnet-20241022").
- .max_tokens
Integer specifying the maximum number of tokens in the response (default: 1024).
- .temperature
Numeric between 0 and 1 controlling response randomness.
- .top_k
Integer controlling diversity by restricting sampling to the K most likely tokens.
- .top_p
Numeric between 0 and 1 for nucleus sampling.
- .metadata
List of additional metadata to include with the request.
- .stop_sequences
Character vector of sequences that will halt response generation.
- .tools
List of additional tools or functions the model can use.
- .api_url
Base URL for the Anthropic API (default: "https://api.anthropic.com/").
- .verbose
Logical; if TRUE, displays additional information about the API call (default: FALSE).
- .max_tries
Integer specifying the maximum number of retry attempts for a failed request (default: 3).
- .timeout
Integer specifying the request timeout in seconds (default: 60).
- .stream
Logical; if TRUE, streams the response piece by piece (default: FALSE).
- .dry_run
Logical; if TRUE, returns the prepared request object without executing it (default: FALSE).
Examples
if (FALSE) { # \dontrun{
# Basic usage
msg <- llm_message("What is R programming?")
result <- claude_chat(msg)
# With custom parameters
result2 <- claude_chat(msg,
  .temperature = 0.7,
  .max_tokens = 1000)
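# A sketch of two further options described in the Arguments section.
# Inspect the prepared request without sending it; per the .dry_run
# documentation, the request object is returned instead of a response.
req <- claude_chat(msg, .dry_run = TRUE)
# Stream the reply as it is generated, halting at a custom stop sequence
result3 <- claude_chat(msg,
  .stream = TRUE,
  .stop_sequences = c("END"))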
} # }