Custom providers allow you to integrate any AI model API with the Circuit Breaker Labs CLI using Rhai scripting. This enables safety testing for proprietary models, internal deployments, or any non-standard API.

What is Rhai?

Rhai is a simple, embedded scripting language designed for Rust applications. It has a JavaScript-like syntax and is used by the CLI to translate between the standard Circuit Breaker Labs message format and your custom API’s format.
You don’t need to be a Rhai expert to create custom providers. The examples below cover all common use cases.

Why Rhai for Custom Providers?

Sandboxed Execution

Scripts run in a secure sandbox with no file system or network access

Simple Syntax

JavaScript-like syntax that’s easy to learn and read

Type Safety

Dynamic typing with type-safe operations: invalid operations fail with clear errors instead of silent coercions

Fast Performance

Scripts are pre-compiled to an AST for efficient repeated execution

How Custom Providers Work

Custom providers act as translators between the CLI and your API:
1. CLI Prepares Messages

The CLI generates conversation messages in a standard format:
[
  {"role": "user", "content": "Hello!"},
  {"role": "assistant", "content": "Hi there!"},
  {"role": "user", "content": "How are you?"}
]
2. build_request() Transforms

Your Rhai script’s build_request() function converts these messages into your API’s request format.
3. CLI Posts Request

The CLI sends the transformed request to your specified URL endpoint.
4. parse_response() Extracts

Your script’s parse_response() function extracts the assistant’s message from the API response.
5. Evaluation Continues

The CLI uses the extracted message for safety evaluation and continues the conversation if needed.
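The five steps above can be sketched end-to-end in Python. These functions are hypothetical stand-ins for your Rhai script's transforms, and the response body is fabricated for illustration; only the data flow is real:

```python
import json

# Step 2 stand-in: wrap the standard message list in an OpenAI-style request body.
def build_request(messages):
    return {"model": "your-model-name", "messages": messages}

# Step 4 stand-in: pull the assistant's reply out of an OpenAI-style response body.
def parse_response(body):
    return body["choices"][0]["message"]["content"]

# Step 1: the CLI prepares messages in the standard format.
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there!"},
    {"role": "user", "content": "How are you?"},
]

request_body = build_request(messages)
# Step 3: the CLI serializes the map and POSTs it to your --url endpoint.
wire_payload = json.dumps(request_body)

# A fabricated API response standing in for what the endpoint would return.
response_body = {"choices": [{"message": {"role": "assistant", "content": "Doing well!"}}]}

# Step 4/5: the extracted string feeds back into the safety evaluation.
reply = parse_response(response_body)
```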

Script Structure

Every custom provider script must implement two functions:
// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    // Transform messages into your API's format
    #{
        "model": "your-model-name",
        "messages": messages
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    // Extract the assistant's message from your API's response
    body["choices"][0]["message"]["content"].to_string()
}
Both functions are required. The CLI will fail if either is missing or has the wrong signature.

Complete Examples from Source

The CLI repository includes working examples for common API formats:

OpenAI Chat Completions API

// Example OpenAI-compatible provider script
// This script implements the OpenAI chat completions API spec

// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    #{
        "model": "gpt-4o",
        "messages": messages
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    body["choices"][0]["message"]["content"].to_string()
}
Usage:
cbl single-turn \
    --threshold 0.5 \
    --variations 2 \
    --maximum-iteration-layers 2 \
    custom \
    --url https://api.openai.com/v1/chat/completions \
    --script ./examples/providers/openai_completions.rhai

OpenAI Responses API

// Example OpenAI-compatible provider script
// This script implements the OpenAI responses API spec

// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    #{
        "model": "gpt-4o",
        "input": messages
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    body["output"][0]["content"][0]["text"].to_string()
}

Ollama Chat API

// Example Ollama provider script
// This script implements the Ollama chat API spec

// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    #{
        "model": "llama3.2",
        "messages": messages,
        "stream": false
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    body["message"]["content"].to_string()
}
Usage:
cbl single-turn \
    --threshold 0.5 \
    --variations 2 \
    --maximum-iteration-layers 2 \
    custom \
    --url http://localhost:11434/api/chat \
    --script ./examples/providers/ollama_chat.rhai

Ollama Completions API

// Example Ollama provider script
// This script targets Ollama's OpenAI-compatible chat completions format

// Build the request body that will be POST'd to the endpoint
// messages: array of #{role: String, content: String}
// returns: a map that will be serialized to JSON
fn build_request(messages) {
    #{
        "model": "llama3.2",
        "messages": messages
    }
}

// Parse the response body and extract the assistant's message
// body: the full deserialized JSON response as a Rhai dynamic
// returns: String containing the assistant's message content
fn parse_response(body) {
    body["choices"][0]["message"]["content"].to_string()
}

Creating Your Own Provider

1. Identify Your API Format

Determine what request format your API expects and what response format it returns. Test with curl:
curl -X POST https://your-api.com/completions \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
2. Create Rhai Script

Create a .rhai file with build_request() and parse_response() functions:
fn build_request(messages) {
    // Your request transformation here
    #{
        "your_field": messages
    }
}

fn parse_response(body) {
    // Your response parsing here
    body["your_response_field"].to_string()
}
3. Test the Script

Run a simple evaluation to verify the script works:
cbl single-turn \
    --threshold 0.5 \
    --variations 1 \
    --maximum-iteration-layers 1 \
    custom \
    --url https://your-api.com/endpoint \
    --script ./your-provider.rhai
4. Iterate and Refine

Check error messages for issues with request/response parsing and adjust your script accordingly.
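A quick way to iterate on your parsing logic without running the full CLI is to test it offline against a response body captured with curl in step 1. A minimal Python sketch of that idea (the field name is a placeholder, mirroring the Rhai template above):

```python
import json

# Mirror of the Rhai parse_response template, for offline testing.
# "your_response_field" is a placeholder, not a real API field.
def parse_response(body):
    return str(body["your_response_field"])

# Paste a real response body captured with curl in place of this sample:
saved = '{"your_response_field": "Hello from the model"}'
extracted = parse_response(json.loads(saved))
print(extracted)
```

Once the extraction works here, port the same field access back into your .rhai script.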

Advanced Examples

Adding Custom Parameters

fn build_request(messages) {
    #{
        "model": "your-model",
        "messages": messages,
        "temperature": 0.7,
        "max_tokens": 1000,
        "top_p": 0.9,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0
    }
}

fn parse_response(body) {
    body["choices"][0]["message"]["content"].to_string()
}

Handling Different Message Formats

If your API expects a different message structure:
fn build_request(messages) {
    // Transform standard format into custom format
    let transformed = [];
    
    for msg in messages {
        transformed.push(#{
            "speaker": msg.role,  // "role" -> "speaker"
            "text": msg.content    // "content" -> "text"
        });
    }
    
    #{
        "model": "your-model",
        "conversation": transformed
    }
}

fn parse_response(body) {
    body["response"]["text"].to_string()
}

Concatenating Messages for Completion APIs

Some APIs expect a single prompt instead of structured messages:
fn build_request(messages) {
    // Convert message array to single prompt string
    let prompt = "";
    
    for msg in messages {
        if msg.role == "user" {
            prompt += "User: " + msg.content + "\n";
        } else if msg.role == "assistant" {
            prompt += "Assistant: " + msg.content + "\n";
        } else if msg.role == "system" {
            prompt += "System: " + msg.content + "\n";
        }
    }
    
    prompt += "Assistant: ";
    
    #{
        "model": "your-model",
        "prompt": prompt
    }
}

fn parse_response(body) {
    body["completion"].to_string()
}

Handling Nested Response Structures

fn parse_response(body) {
    // Navigate deeply nested response structure
    let result = body["data"]["results"][0]["output"]["message"];
    
    // Handle optional fields with default values
    if result == () {
        return "[No response generated]";
    }
    
    result.to_string()
}

Adding Debug Logging

fn build_request(messages) {
    print("Building request for " + messages.len() + " messages");
    
    let request = #{
        "model": "your-model",
        "messages": messages
    };
    
    print("Request built successfully");
    request
}

fn parse_response(body) {
    print("Parsing response: " + body.to_string());
    
    let content = body["choices"][0]["message"]["content"].to_string();
    
    print("Extracted content: " + content);
    content
}
Debug output appears in the CLI logs. Use this to troubleshoot request/response issues.

Rhai Quick Reference

Data Types

// String
let name = "value";

// Integer
let count = 42;

// Float
let temperature = 0.7;

// Boolean
let enabled = true;

// Array
let items = [1, 2, 3];

// Map (object)
let obj = #{
    "key": "value",
    "number": 42
};

Control Flow

// If statement
if condition {
    // code
} else {
    // code
}

// For loop
for item in array {
    // code
}

// While loop
while condition {
    // code
}

Common Operations

// String concatenation
let full = "Hello " + "World";

// Array operations
array.push(item);
let length = array.len();

// Map access
let value = map["key"];
map["new_key"] = "new_value";

// Type conversion
let str = value.to_string();

Functions

// Function definition
fn my_function(param1, param2) {
    // code
    return result;
}

// Function call
let result = my_function("arg1", "arg2");

Authentication and Headers

Authentication is typically handled via HTTP headers passed from environment variables:
# Set your API key
export CUSTOM_API_KEY="your-api-key"

# Run with custom provider
cbl single-turn \
    --threshold 0.5 \
    --variations 2 \
    --maximum-iteration-layers 2 \
    custom \
    --url https://your-api.com/completions \
    --script ./provider.rhai
The CLI automatically includes standard headers. Your API key should be configured according to your API’s authentication requirements (Bearer token, API key header, etc.).
If you need custom headers, they can be set at the HTTP client level. Contact the Circuit Breaker Labs team if you need advanced header customization.
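For reference only (this is not CLI configuration), a Python sketch contrasting the two authentication styles mentioned above, using the CUSTOM_API_KEY variable from the shell example:

```python
import os

def auth_headers(style):
    # Illustrative sketch of the two common ways APIs expect a key to be sent.
    key = os.environ.get("CUSTOM_API_KEY", "")
    if style == "bearer":
        return {"Authorization": "Bearer " + key}  # Bearer-token style
    return {"x-api-key": key}                      # API-key-header style

os.environ["CUSTOM_API_KEY"] = "your-api-key"
print(auth_headers("bearer"))
print(auth_headers("header"))
```

Whichever style your API uses, the Rhai script itself never sees the key; authentication stays at the HTTP layer.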

Troubleshooting

Your script is missing the build_request function. Ensure it’s defined:
fn build_request(messages) {
    // implementation
}
Your script is missing the parse_response function. Ensure it’s defined:
fn parse_response(body) {
    // implementation
}
The response structure doesn’t match your parsing logic. Add debug logging:
fn parse_response(body) {
    print(body);  // Print full response to see structure
    // Then adjust parsing logic
}
If requests never reach your API or come back with errors:
  • Verify your API URL is correct
  • Check authentication headers are set properly
  • Ensure the request format matches what your API expects
  • Test with curl to confirm API access
Your parse_response logic might be extracting the wrong field. Log the full response:
fn parse_response(body) {
    print("Full response: " + body.to_string());
    // Adjust field access based on output
}

Testing Your Custom Provider

1. Test with Single Variation

Start with minimal settings to quickly identify issues:
cbl single-turn \
    --threshold 0.5 \
    --variations 1 \
    --maximum-iteration-layers 1 \
    custom --url YOUR_URL --script your-provider.rhai
2. Verify Request Format

Check logs to ensure requests match your API’s expected format. Add print() statements in build_request() if needed.
3. Verify Response Parsing

Check that assistant messages are being extracted correctly. Add print() in parse_response() to debug.
4. Scale Up Testing

Once basic tests work, increase complexity:
cbl multi-turn \
    --threshold 0.5 \
    --max-turns 8 \
    --test-types user_persona,semantic_chunks \
    custom --url YOUR_URL --script your-provider.rhai

Best Practices

Copy one of the provided examples that most closely matches your API format, then modify incrementally.
Use print() liberally during development to see request/response structures.
Before writing your Rhai script, confirm you can successfully call your API with curl.
Add checks for missing or null fields in responses:
fn parse_response(body) {
    let result = body["data"];
    if result == () {
        return "[Error: No data in response]";
    }
    result.to_string()
}
Focus on request transformation and response parsing. Complex logic should live in your API, not the script.

Real-World Use Cases

Internal Model Deployments

Test proprietary models deployed on internal infrastructure

Fine-Tuned Models

Evaluate custom fine-tunes on non-standard endpoints

Research Models

Test experimental models with unique API formats

Multi-Model Routing

Route requests to different models based on custom logic

Next Steps

Rhai Language Documentation

Complete reference for Rhai scripting language

Providers Overview

Learn about OpenAI and Ollama providers