
Package index

- llm_message() - Create or Update Large Language Model Message Object
- df_llm_message() - Convert a Data Frame to an LLMMessage Object
- get_reply(), last_reply() - Retrieve Assistant Reply as Text
- get_reply_data(), last_reply_data() - Retrieve Assistant Reply as Structured Data
- get_user_message(), last_user_message() - Retrieve a User Message by Index
- get_metadata(), last_metadata() - Retrieve Metadata from Assistant Replies
- get_logprobs() - Retrieve Log Probabilities from Assistant Replies
- rate_limit_info() - Get the current rate limit information for all or a specific API
Tidyllm Main Verbs
Core verbs for interacting with LLMs: sending messages, generating embeddings, running deep research, and managing batch requests.
- chat() - Chat with a Language Model
- embed() - Generate text embeddings
- deep_research() - Run Deep Research via a Provider
- check_job() - Check the Status of a Batch or Research Job
- fetch_job() - Fetch Results from a Batch or Research Job
- send_batch() - Send a batch of messages to a batch API
- check_batch() - Check Batch Processing Status
- fetch_batch() - Fetch Results from a Batch API
- list_batches() - List all Batch Requests on a Batch API
- list_models() - List Available Models for a Provider
- tidyllm_schema() - Create a JSON Schema for Structured Outputs
- field_chr(), field_fct(), field_dbl(), field_lgl() - Define Field Descriptors for JSON Schema
- field_object() - Define a nested object field
- tidyllm_tool() - Create a Tool Definition for tidyllm
- ellmer_tool() - Convert an ellmer Tool to a tidyllm TOOL
- img() - Create an Image Object
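The verbs above compose with R's pipe: build a message with llm_message(), send it with chat(), and extract the answer with get_reply() or get_reply_data(). A minimal sketch follows; the model name, schema field names, and the exact .schema argument are assumptions to be checked against the package documentation, and running it requires a valid API key.

```r
library(tidyllm)

# Plain chat: build a message, send it to a provider, read the reply.
# "gpt-4o" is an illustrative model name, not a recommendation.
conversation <- llm_message("What is the capital of France?") |>
  chat(openai(.model = "gpt-4o"))

get_reply(conversation)

# Structured output: describe the expected fields with a schema, then
# retrieve the reply as parsed data instead of free text.
city_schema <- tidyllm_schema(
  city    = field_chr("Name of the city"),
  country = field_chr("Name of the country")
)

llm_message("Where is the Eiffel Tower?") |>
  chat(openai(.model = "gpt-4o"), .schema = city_schema) |>
  get_reply_data()
```

The same pipeline works with any provider function from the next section; swapping openai() for, say, ollama() changes the backend without changing the verbs.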
API Provider Functions
Provider functions called from the main verbs. Each provider exposes a consistent interface: pass one to chat(), embed(), send_batch(), and so on.
- openai() - OpenAI Provider Function
- claude() - Provider Function for Claude models on the Anthropic API
- gemini() - Google Gemini Provider Function
- groq() - Groq API Provider Function
- mistral() - Mistral Provider Function
- ollama() - Ollama API Provider Function
- perplexity() - Perplexity Provider Function
- deepseek() - Deepseek Provider Function
- voyage() - Voyage Provider Function
- openrouter() - OpenRouter Provider Function
- llamacpp() - llama.cpp Provider Function
- azure_openai() - Azure OpenAI Endpoint Provider Function
OpenAI-Specific Functions
Functions for OpenAI chat, batch processing, embeddings, and model listing.
- openai_chat() - Send LLM Messages to the OpenAI Chat Completions API
- openai_embedding() - Generate Embeddings Using OpenAI API
- openai_list_models() - List Available Models from the OpenAI API
- send_openai_batch() - Send a Batch of Messages to OpenAI Batch API
- check_openai_batch() - Check Batch Processing Status for OpenAI Batch API
- fetch_openai_batch() - Fetch Results for an OpenAI Batch
- list_openai_batches() - List OpenAI Batch Requests
- cancel_openai_batch() - Cancel an In-Progress OpenAI Batch
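The batch functions follow a send / check / fetch lifecycle: a batch is submitted, processed asynchronously on the provider's side, and collected later. A hedged sketch of that lifecycle (requires an OpenAI API key; the exact shape of the returned objects is an assumption, so treat this as an outline rather than a verified recipe):

```r
library(tidyllm)
library(purrr)

# Build a list of messages, one per prompt.
msgs <- map(
  c("Capital of France?", "Capital of Spain?"),
  llm_message
)

# Submit the whole list as one asynchronous batch job.
batch <- send_openai_batch(msgs)

# Later (batches can take minutes to hours): poll the status,
# then fetch the completed conversations and pull out the replies.
check_openai_batch(batch)
results <- fetch_openai_batch(batch)
map_chr(results, get_reply)
```

The provider-agnostic send_batch() / check_batch() / fetch_batch() verbs wrap the same workflow for any backend that supports batching.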
Claude-Specific Functions
Functions for Anthropic Claude chat, batch processing, and file management.
- claude_chat() - Interact with Claude AI models via the Anthropic API
- claude_list_models() - List Available Models from the Anthropic Claude API
- send_claude_batch() - Send a Batch of Messages to Claude API
- check_claude_batch() - Check Batch Processing Status for Claude API
- fetch_claude_batch() - Fetch Results for a Claude Batch
- list_claude_batches() - List Claude Batch Requests
- claude_upload_file() - Upload a File to Claude API
- claude_delete_file() - Delete a File from Claude API
- claude_file_metadata() - Retrieve Metadata for a File from Claude API
- claude_list_files() - List Files in Claude API
- claude_websearch() - Builtin Claude Web Search Tool
Gemini-Specific Functions
Functions for Google Gemini chat, embeddings, file management, and batch processing.
- gemini_chat() - Send LLMMessage to Gemini API
- gemini_embedding() - Generate Embeddings Using the Google Gemini API
- gemini_list_models() - List Available Models from the Google Gemini API
- gemini_upload_file() - Upload a File to Gemini API
- gemini_list_files() - List Files in Gemini API
- gemini_file_metadata() - Retrieve Metadata for a File from Gemini API
- gemini_delete_file() - Delete a File from Gemini API
- send_gemini_batch() - Submit a list of LLMMessage objects to Gemini's batch API
- check_gemini_batch() - Check the Status of a Gemini Batch Operation
- fetch_gemini_batch() - Fetch Results for a Gemini Batch
- list_gemini_batches() - List Recent Gemini Batch Operations
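The Gemini file helpers cover the full lifecycle of a remote file: upload, inspect, list, and delete. A sketch of that lifecycle under stated assumptions (a Gemini API key is configured, "report.pdf" is a local file, and the field used to identify the uploaded file is an assumption about the returned object's shape — check the function docs):

```r
library(tidyllm)

# Upload a local file to the Gemini Files endpoint.
uploaded <- gemini_upload_file("report.pdf")

# Inspect the uploaded file and confirm it appears in the listing.
gemini_file_metadata(uploaded$name)  # $name assumed to hold the file id
gemini_list_files()

# Clean up once the file is no longer needed.
gemini_delete_file(uploaded$name)
```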
Groq-Specific Functions
Functions for Groq chat, audio transcription, and batch processing.
- groq_chat() - Send LLM Messages to the Groq Chat API
- groq_transcribe() - Transcribe an Audio File Using Groq transcription API
- groq_list_models() - List Available Models from the Groq API
- send_groq_batch() - Send a Batch of Messages to the Groq API
- check_groq_batch() - Check Batch Processing Status for Groq API
- fetch_groq_batch() - Fetch Results for a Groq Batch
- list_groq_batches() - List Groq Batch Requests
Mistral-Specific Functions
Functions for Mistral chat, embeddings, and batch processing.
- mistral_chat() - Send LLMMessage to Mistral API
- mistral_embedding() - Generate Embeddings Using Mistral API
- mistral_list_models() - List Available Models from the Mistral API
- send_mistral_batch() - Send a Batch of Requests to the Mistral API
- check_mistral_batch() - Check Batch Processing Status for Mistral Batch API
- fetch_mistral_batch() - Fetch Results for a Mistral Batch
- list_mistral_batches() - List Mistral Batch Requests
Ollama-Specific Functions
Functions for local Ollama models: chat, embeddings, and model management.
- ollama_chat() - Interact with local AI models via the Ollama API
- ollama_embedding() - Generate Embeddings Using Ollama API
- ollama_list_models() - Retrieve and return model information from the Ollama API
- ollama_download_model() - Download a model from the Ollama API
- ollama_delete_model() - Delete a model from the Ollama API
- send_ollama_batch() - Send a Batch of Messages to Ollama API
Perplexity-Specific Functions
Functions for Perplexity chat and deep research.
- perplexity_chat() - Send LLM Messages to the Perplexity Chat API
- perplexity_deep_research() - Submit a Deep Research Request to Perplexity
- perplexity_check_research() - Check the Status of a Perplexity Deep Research Job
- perplexity_fetch_research() - Fetch Results from a Completed Perplexity Deep Research Job
- deepseek_chat() - Send LLM Messages to the DeepSeek Chat API
- voyage_embedding() - Generate Embeddings Using Voyage AI API
- voyage_rerank() - Rerank Documents Using Voyage AI API
OpenRouter-Specific Functions
Functions for OpenRouter, which provides access to 300+ models via a single API key.
- openrouter_chat() - Send LLM Messages to the OpenRouter Chat API
- openrouter_embedding() - Generate Embeddings Using the OpenRouter API
- openrouter_list_models() - List Available Models on OpenRouter
- openrouter_credits() - Get OpenRouter Credit Balance
- openrouter_generation() - Get Details for an OpenRouter Generation
llama.cpp-Specific Functions
Functions for local llama.cpp servers: chat, embeddings, reranking, model management, and server utilities.
- llamacpp_chat() - Send LLM Messages to a llama.cpp Server
- llamacpp_embedding() - Generate Embeddings Using a llama.cpp Server
- llamacpp_rerank() - Rerank Documents Using a llama.cpp Server
- llamacpp_list_models() - List Models Loaded in the llama.cpp Server
- llamacpp_health() - Check Health of the llama.cpp Server
- llamacpp_list_local_models() - List Local GGUF Model Files
- llamacpp_download_model() - Download a GGUF Model from Hugging Face
- llamacpp_delete_model() - Delete a Local GGUF Model File
- list_hf_gguf_files() - List GGUF Files Available in a Hugging Face Repository
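Because llama.cpp runs locally, a typical session starts by checking that the server is reachable and which model it has loaded, then chats against it with the same verbs used for hosted providers. A minimal sketch, assuming a llama.cpp server is already running at its default local address with a model loaded:

```r
library(tidyllm)

# Verify the local server is up and see which model it is serving.
llamacpp_health()
llamacpp_list_models()

# Chat against the local server with the same verb pipeline used
# for hosted providers; no API key is needed.
llm_message("Summarise llama.cpp in one sentence.") |>
  chat(llamacpp()) |>
  get_reply()
```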
Azure OpenAI-Specific Functions
Functions for Azure-hosted OpenAI endpoints: chat, embeddings, and batch processing.
- azure_openai_chat() - Send LLM Messages to an Azure OpenAI Chat Completions endpoint
- azure_openai_embedding() - Generate Embeddings Using OpenAI API on Azure
- send_azure_openai_batch() - Send a Batch of Messages to Azure OpenAI Batch API
- check_azure_openai_batch() - Check Batch Processing Status for Azure OpenAI Batch API
- fetch_azure_openai_batch() - Fetch Results for an Azure OpenAI Batch
- list_azure_openai_batches() - List Azure OpenAI Batch Requests
- ellmer_tool() - Convert an ellmer Tool to a tidyllm TOOL
- chat_ellmer() - Send LLM Messages to an Ellmer Chat Object
- ellmer() - Alias for the Ellmer Provider Function
- pdf_page_batch() - Batch Process PDF into LLM Messages
- LLMMessage() - Large Language Model Message Class