Generate Embeddings Using Ollama API

Usage

ollama_embedding(
  .llm,
  .model = "all-minilm",
  .truncate = TRUE,
  .ollama_server = "http://localhost:11434",
  .timeout = 120,
  .dry_run = FALSE
)

Arguments

.llm

An existing LLMMessage object (or a character vector of texts to embed).

.model

The embedding model identifier (default: "all-minilm").

.truncate

Whether to truncate inputs to fit the model's context length (default: TRUE).

.ollama_server

The URL of the Ollama server to be used (default: "http://localhost:11434").

.timeout

Timeout for the API request in seconds (default: 120).

.dry_run

If TRUE, perform a dry run and return the request object without contacting the server (default: FALSE).

Value

A matrix where each column is the embedding of one input: a message in the message history, or an element of the character vector.
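
Examples

A minimal usage sketch. It assumes a local Ollama server is running at the default address with the "all-minilm" model pulled; the exact matrix dimensions depend on the chosen model.

```r
# Embed a character vector of texts; the result is a matrix
# with one column per input text.
texts <- c("The quick brown fox", "jumps over the lazy dog")
emb <- ollama_embedding(texts, .model = "all-minilm")

# Columns correspond to inputs; rows to embedding dimensions.
dim(emb)

# Inspect the request that would be sent, without contacting the server.
req <- ollama_embedding(texts, .dry_run = TRUE)
```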