Available Providers
The CLI supports three provider types:

- `openai`: OpenAI API and compatible endpoints
- `ollama`: Local models via Ollama
- `custom`: Any API via Rhai scripting
Command Structure
Providers are specified as subcommands after the evaluation type.

Example Breakdown

- `--output-file result.json`: Top-level CLI argument
- `single-turn`: Evaluation type
- `--threshold 0.5 ...`: Evaluation arguments
- `openai`: Provider type
- `--model gpt-4o --temperature 1.0`: Provider-specific arguments
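Putting those pieces together, a full invocation might look like the sketch below. The binary name `cb-evals` is a placeholder, not the CLI's actual name; substitute whatever your installation provides.

```shell
# Hypothetical invocation; "cb-evals" stands in for the actual CLI binary name.
# Order: top-level arguments, evaluation type + its arguments, provider + its arguments.
cb-evals --output-file result.json \
  single-turn --threshold 0.5 \
  openai --model gpt-4o --temperature 1.0
```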
OpenAI Provider
The `openai` provider supports OpenAI's API and any OpenAI-compatible endpoint.
Basic Usage
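A minimal sketch of an OpenAI run, assuming the placeholder binary name `cb-evals`:

```shell
# Hypothetical invocation; "cb-evals" is a placeholder binary name.
# Supplying the key via the environment (recommended) instead of --api-key:
export OPENAI_API_KEY="sk-..."
cb-evals single-turn openai --model gpt-4o
```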
Required Arguments
- `--model`: OpenAI model name (e.g., `gpt-4o`, `gpt-4-turbo`, `gpt-3.5-turbo`) or custom fine-tune ID
- `--api-key`: OpenAI API key. Can be provided via the `--api-key` flag or the `OPENAI_API_KEY` environment variable (recommended)

Optional Arguments
Connection Settings
Sampling Parameters
- Sampling temperature between 0 and 2. Higher values make output more random.
- Nucleus sampling parameter; an alternative to temperature.
- Number between -2.0 and 2.0 that penalizes tokens based on their frequency in the text so far.
- Number between -2.0 and 2.0 that penalizes tokens based on whether they appear in the text so far.
Output Control
Advanced Options
- Whether to return log probabilities of output tokens.
- Integer between 0 and 20 specifying the number of most likely tokens to return at each position.
- Modify the likelihood of specified tokens. Format: `--logit-bias "token_id:bias,token_id:bias"`
- Processing tier: `auto`, `default`, `flex`, `scale`, or `priority`
- Effort for reasoning models: `none`, `minimal`, `low`, `medium`, `high`, or `xhigh`

Example: Testing a Fine-Tune
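A hedged sketch of evaluating a fine-tuned model: the binary name `cb-evals` and the fine-tune ID shown are placeholders (the ID follows OpenAI's `ft:` naming pattern but is illustrative).

```shell
# Placeholder binary name; the fine-tune ID below is illustrative, not real.
cb-evals single-turn openai \
  --model ft:gpt-4o-2024-08-06:my-org::abc123 \
  --temperature 0.0
```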
Ollama Provider
The `ollama` provider connects to local Ollama models running on your machine or to a remote Ollama server.
Basic Usage
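A minimal sketch of an Ollama run, assuming the placeholder binary name `cb-evals` and a local Ollama server on its default port:

```shell
# Hypothetical invocation; "cb-evals" is a placeholder binary name.
# Assumes Ollama is already running locally with the model pulled.
cb-evals single-turn ollama --model llama3.2
```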
Install and Start Ollama
Download from ollama.ai and pull a model:
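For example, using the standard Ollama commands:

```shell
# Pull a model after installing Ollama from ollama.ai
ollama pull llama3.2

# Verify the model is available
ollama list
```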
Required Arguments
- `--model`: Ollama model name (e.g., `llama3.2`, `mistral`, `phi3`)

Optional Arguments
Connection Settings
Ollama server URL. Set via `--base-url` or the `OLLAMA_BASE_URL` environment variable.
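For example, pointing the CLI at a remote Ollama server. The binary name `cb-evals` and the hostname are placeholders; `11434` is Ollama's default port.

```shell
# Placeholder binary name and hostname; 11434 is Ollama's default port.
export OLLAMA_BASE_URL="http://gpu-box.internal:11434"
cb-evals single-turn ollama --model mistral
```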
Sampling Parameters
- Model temperature; higher values make answers more creative.
- Reduces the probability of generating nonsense; higher values produce more diverse output.
- Works together with top-k; higher values lead to more diverse text.
- How strongly to penalize repetitions.
Generation Control
Performance Tuning
Advanced Sampling
- Enable Mirostat sampling: 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0
- Mirostat learning rate.
- Mirostat tau; controls the balance between coherence and diversity.
- Tail-free sampling; reduces the impact of less probable tokens.
Custom Provider
The `custom` provider lets you integrate any API using Rhai scripting. It is ideal for:
- Proprietary model APIs
- Custom inference endpoints
- Non-OpenAI-compatible services
- Internal model deployments
For detailed information on creating custom providers, see the Custom Providers guide.
Basic Usage
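A hedged sketch of a custom-provider run. Both the binary name `cb-evals` and the flag names `--endpoint` and `--script` are assumptions; check the CLI's help output for the exact flags.

```shell
# Placeholder binary and flag names; verify against the CLI's --help output.
cb-evals single-turn custom \
  --endpoint https://models.example.com/v1/generate \
  --script ./my_provider.rhai
```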
Required Arguments
- The endpoint URL to POST requests to
- Path to the Rhai script file that translates requests and responses
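A minimal Rhai script sketch using the `build_request` and `parse_response` entry points named later in this guide; the JSON field names are illustrative assumptions, not the real schema of any endpoint.

```rhai
// Minimal translation-script sketch. The build_request / parse_response
// function names follow the convention referenced in this guide; the
// request and response shapes below are illustrative assumptions.
fn build_request(prompt) {
    // Return the JSON body to POST to the endpoint.
    #{
        "input": prompt,
        "max_tokens": 256
    }
}

fn parse_response(response) {
    // Extract the model's text from the endpoint's JSON response.
    response["output"]
}
```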
Authentication
Custom providers support authentication via request headers. See the Custom Providers guide for complete examples with authentication and complex request schemas.
Provider Comparison
| Feature | OpenAI | Ollama | Custom |
|---|---|---|---|
| Setup Difficulty | Easy | Medium | Advanced |
| Cost | Pay per token | Free (local compute) | Varies |
| Latency | Low (cloud) | Very low (local) | Varies |
| Model Selection | OpenAI models + fine-tunes | Open-source models | Any model |
| Configuration | Built-in parameters | Built-in parameters | Script-based |
| Best For | Production testing, OpenAI models | Local development, open-source models | Custom APIs, proprietary models |
| Authentication | API key | None (local) | Custom via headers |
| Offline Usage | No | Yes | Depends |
Environment Variables
Circuit Breaker Labs
Your Circuit Breaker Labs API key. Required for all evaluations.
OpenAI Provider
Ollama Provider
Ollama server URL
Custom Provider
Custom providers can read any environment variables your Rhai script accesses. Common patterns:
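A typical setup might export the variables once per shell session. `OPENAI_API_KEY` and `OLLAMA_BASE_URL` are named in this guide; the Circuit Breaker Labs key variable name below is a placeholder, as the exact name is not shown here.

```shell
# CIRCUIT_BREAKER_API_KEY is a placeholder name; check the docs for the real one.
export CIRCUIT_BREAKER_API_KEY="..."
# Named in this guide:
export OPENAI_API_KEY="sk-..."
export OLLAMA_BASE_URL="http://localhost:11434"
```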
Common Scenarios
Testing Multiple Models
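One way to run the same evaluation across several models is a simple shell loop. The binary name `cb-evals` is a placeholder.

```shell
# Placeholder binary name; run the same evaluation against several models.
for model in gpt-4o gpt-4-turbo gpt-3.5-turbo; do
  cb-evals --output-file "result-$model.json" \
    single-turn openai --model "$model"
done
```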
CI/CD Integration
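A hedged CI step, assuming the CLI signals evaluation failure through a non-zero exit code (not confirmed here) and using the placeholder binary name `cb-evals`:

```shell
# Placeholder binary name; assumes a non-zero exit code on failed evaluations.
cb-evals --output-file result.json \
  single-turn --threshold 0.5 \
  openai --model gpt-4o || exit 1
```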
Development Workflow
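A common pattern is to iterate locally against a free Ollama model before spending tokens on OpenAI. The binary name `cb-evals` is a placeholder.

```shell
# Iterate locally first (free), then promote to OpenAI for a final check.
ollama pull llama3.2
cb-evals single-turn ollama --model llama3.2   # placeholder binary name
cb-evals single-turn openai --model gpt-4o     # final run against OpenAI
```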
Troubleshooting
OpenAI: 'Invalid API Key' Error
- Verify your API key is correct: `echo $OPENAI_API_KEY`
- Ensure there are no extra spaces or quotes in the environment variable
- Check that your API key has sufficient credits
Ollama: 'Connection Refused' Error
- Verify Ollama is running: `ollama list`
- Check that the base URL is correct
- Ensure the model is pulled: `ollama pull llama3.2`
Custom: 'Script Error' Messages
- Verify your Rhai script syntax is correct
- Check that `build_request` and `parse_response` functions exist
- Test your script with simple inputs first
- See the Custom Providers guide for debugging tips
Next Steps
- Custom Providers: Learn how to create custom providers with Rhai scripting
- Single-Turn Evaluations: Configure and run single-turn safety tests