Send an LLMMessage to the Gemini API
Usage
gemini_chat(
.llm,
.model = "gemini-1.5-flash",
.fileid = NULL,
.temperature = NULL,
.max_output_tokens = NULL,
.top_p = NULL,
.top_k = NULL,
.presence_penalty = NULL,
.frequency_penalty = NULL,
.stop_sequences = NULL,
.safety_settings = NULL,
.tools = NULL,
.tool_config = NULL,
.json_schema = NULL,
.timeout = 120,
.dry_run = FALSE,
.max_tries = 3,
.verbose = FALSE,
.stream = FALSE
)
Arguments
- .llm
An existing LLMMessage object or an initial text prompt.
- .model
The model identifier (default: "gemini-1.5-flash").
- .fileid
Optional file name for a file uploaded via
gemini_upload_file()
(default: NULL).
- .temperature
Controls randomness in generation (default: NULL, range: 0.0-2.0).
- .max_output_tokens
Maximum tokens in the response (default: NULL).
- .top_p
Controls nucleus sampling (default: NULL, range: 0.0-1.0).
- .top_k
Controls diversity in token selection (default: NULL, range: 0 or more).
- .presence_penalty
Penalizes new tokens (default: NULL, range: -2.0 to 2.0).
- .frequency_penalty
Penalizes frequent tokens (default: NULL, range: -2.0 to 2.0).
- .stop_sequences
Optional character sequences to stop generation (default: NULL, up to 5).
- .safety_settings
A list of safety settings (default: NULL).
- .tools
Optional tools for function calling or code execution (default: NULL).
- .tool_config
Optional configuration for the tools specified (default: NULL).
- .json_schema
A JSON schema, supplied as an R list, that enforces the structure of the model output (default: NULL).
- .timeout
Request timeout in seconds (default: 120).
- .dry_run
If TRUE, perform a dry run and return the request object without sending it (default: FALSE).
- .max_tries
Maximum number of attempts for the API request (default: 3).
- .verbose
If TRUE, show additional information after the API call (default: FALSE).
- .stream
If TRUE, stream the response as it is generated (default: FALSE).