
This function sends a message history to the Perplexity Chat API and returns the assistant's reply.

Usage

perplexity_chat(
  .llm,
  .model = "llama-3.1-sonar-small-128k-online",
  .max_tokens = 1024,
  .temperature = NULL,
  .top_p = NULL,
  .frequency_penalty = NULL,
  .presence_penalty = NULL,
  .stop = NULL,
  .search_domain_filter = NULL,
  .return_images = FALSE,
  .search_recency_filter = NULL,
  .api_url = "https://api.perplexity.ai/",
  .json = FALSE,
  .timeout = 60,
  .verbose = FALSE,
  .stream = FALSE,
  .dry_run = FALSE,
  .max_tries = 3
)

Arguments

.llm

An LLMMessage object containing the conversation history.

.model

The identifier of the model to use (default: "llama-3.1-sonar-small-128k-online").

.max_tokens

The maximum number of tokens that can be generated in the response (default: 1024).

.temperature

Controls the randomness of the model's response. Values strictly between 0 and 2 are allowed; higher values increase randomness (optional).

.top_p

Nucleus sampling parameter controlling the proportion of probability mass considered. Values strictly between 0 and 1 are allowed (optional).

.frequency_penalty

A number greater than 0. Values greater than 1.0 penalize repeated tokens, reducing the likelihood of repetition (optional).

.presence_penalty

Number between -2.0 and 2.0. Positive values encourage new topics by penalizing tokens that have appeared so far (optional).

.stop

One or more sequences where the API will stop generating further tokens. Can be a string or a list of strings (optional).

.search_domain_filter

A vector of domains to limit or exclude from search results. For exclusion, prefix domains with a "-" (optional, currently in closed beta).

.return_images

Logical; if TRUE, enables returning images from the model's response (default: FALSE, currently in closed beta).

.search_recency_filter

Limits search results to a specific time interval (e.g., "month", "week", "day", or "hour"). Only applies to online models (optional).

.api_url

Base URL for the Perplexity API (default: "https://api.perplexity.ai/").

.json

Whether the response should be structured as JSON (default: FALSE).

.timeout

Request timeout in seconds (default: 60).

.verbose

If TRUE, displays additional information after the API call, including rate limit details (default: FALSE).

.stream

Logical; if TRUE, streams the response piece by piece (default: FALSE).

.dry_run

If TRUE, performs a dry run and returns the constructed request object without executing it (default: FALSE).

.max_tries

Maximum number of retry attempts for the request (default: 3).

Value

A new LLMMessage object containing the original messages plus the assistant's response.
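Examples

The calls below are a minimal sketch of typical usage, not verbatim package examples. They assume a constructor such as llm_message() is available to build the LLMMessage object and that a Perplexity API key is configured in the environment; with .dry_run = TRUE the constructed request object is returned without contacting the API.

# Build a conversation history (llm_message() is assumed here
# to be the package's LLMMessage constructor)
msg <- llm_message("Summarise this week's news about the R language.")

# Send it to the Perplexity Chat API, restricting search to recent results
reply <- perplexity_chat(
  msg,
  .model = "llama-3.1-sonar-small-128k-online",
  .search_recency_filter = "week"
)

# Inspect the constructed request without executing it
req <- perplexity_chat(msg, .dry_run = TRUE)

Because the returned value is a new LLMMessage object containing the full history plus the assistant's reply, calls can be chained to continue the conversation.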