Call the OpenAI API to interact with ChatGPT or o-reasoning models
Source:R/api_functions.R
chatgpt.Rd
Description
Call the OpenAI API to interact with ChatGPT or o-reasoning models.
Usage
chatgpt(
  .llm,
  .model = "gpt-4o",
  .max_tokens = 1024,
  .temperature = NULL,
  .top_p = NULL,
  .top_k = NULL,
  .frequency_penalty = NULL,
  .presence_penalty = NULL,
  .api_url = "https://api.openai.com/",
  .timeout = 60,
  .verbose = FALSE,
  .wait = TRUE,
  .json = FALSE,
  .min_tokens_reset = 0L,
  .stream = FALSE,
  .dry_run = FALSE
)
Arguments
- .llm
An existing LLMMessage object or an initial text prompt.
- .model
The model identifier (default: "gpt-4o").
- .max_tokens
The maximum number of tokens to generate (default: 1024).
- .temperature
Control for randomness in response generation (optional).
- .top_p
Nucleus sampling parameter (optional).
- .top_k
Top k sampling parameter (optional).
- .frequency_penalty
Controls repetition frequency (optional).
- .presence_penalty
Controls how much to penalize repeating content (optional).
- .api_url
Base URL for the API (default: https://api.openai.com/).
- .timeout
Request timeout in seconds (default: 60).
- .verbose
Should additional information be shown after the API call? (default: FALSE)
- .wait
Should we wait for rate limits, if necessary? (default: TRUE)
- .json
Should the output be returned in JSON mode? (default: FALSE)
- .min_tokens_reset
Minimum number of tokens that must remain in the rate limit before waiting for a token-limit reset (default: 0L).
- .stream
Stream back the response piece by piece (default: FALSE).
- .dry_run
If TRUE, perform a dry run and return the request object without sending it (default: FALSE).
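Examples
A minimal usage sketch. It assumes this function comes from the tidyllm package, that `llm_message()` is the package's constructor for the LLMMessage object expected by `.llm`, and that an OpenAI API key is available in the OPENAI_API_KEY environment variable; adapt the names if your setup differs.

```r
library(tidyllm)

# Build a message and send it to the OpenAI API (requires OPENAI_API_KEY)
conversation <- llm_message("Explain the difference between a list and a vector in R.") |>
  chatgpt(.model = "gpt-4o", .temperature = 0.7)

# Inspect the request object without making an API call
request <- llm_message("Hello") |>
  chatgpt(.dry_run = TRUE)
```

Setting `.dry_run = TRUE` is useful for debugging, since it returns the assembled request object instead of contacting the API.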