This repository contains reference implementations for integrating F5 Guardrails into applications. It showcases both inline (scan the prompt before it reaches the LLM) and out-of-band (scan both the user input and the model's output) patterns across plain Python scripts and Streamlit demos.
These examples help developers understand how to use the F5 Guardrails API to:
- Secure prompts and responses from large language models (LLMs)
- Integrate real-time or post-processing moderation
- Protect against prompt injection, PII leaks, and unsafe content
These examples are aimed at general developers looking to:
- Experiment with F5 Guardrails API features
- Integrate F5 Guardrails moderation into GenAI Applications
- Understand inline vs out-of-band scanning
| Path | Technique | What it Demonstrates |
|---|---|---|
| `examples/prompt_api_inline.py` | Inline | Call the F5 Guardrails PromptAPI, then forward to a chosen provider only if the prompt is cleared. |
| `examples/scans_api_out_of_band.py` | Out-of-band | Scan prompts with the ScanAPI, optionally forwarding the safe ones to OpenAI. |
| `examples/chatbot_inline.py` | Inline (Streamlit) | One-click chatbot that routes prompts through the PromptAPI before calling OpenAI. |
| `examples/chatbot_inline_multi_model.py` | Inline + multi-provider | Streamlit chatbot that lets you pick a provider (e.g. GPT-4o, BioNeMo) and scans inline. |
| `examples/chatbot_out-of-band.py` | Out-of-band (Streamlit) | Scans both user prompts and OpenAI responses via the ScanAPI. |
| `examples/chatbot_out-of-band_multi_model.py` | Out-of-band + multi-provider | Streamlit chatbot that scans input/output while switching between OpenAI and NVIDIA endpoints. |
| `examples/extract_log_data.py` | Data export | Utility script that pulls historical scanner logs into CSV for offline analysis. |
```bash
git clone https://gitlab.com/ahealy-calypsoai/calypsoai-api-integration-examples.git
cd calypsoai-api-integration-examples
```

Prefer Conda? Create and activate a new environment with `conda create -n calypsoai-examples python=3.11` and `conda activate calypsoai-examples`.
These examples are intentionally lightweight. Install dependencies from `requirements.txt`:

```bash
pip install -U -r requirements.txt
```

Most scripts expect an F5 Guardrails token plus whichever downstream provider you plan to call:

```bash
export F5_GUARDRAILS_API_TOKEN="your_calypso_api_key"  # PromptAPI + most Streamlit demos
export OPENAI_API_KEY="your_openai_api_key"            # Needed for OpenAI-backed chatbots/scripts
export NVIDIA_API_KEY="your_nvidia_api_key"            # Needed only for the NVIDIA multi-model demo
```

Tip: drop the same values into a `.env` file and the scripts will auto-load them via `python-dotenv`.
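The auto-loading behaviour amounts to roughly the following. This is a hand-rolled, stdlib-only stand-in for what `python-dotenv` does, not the library's actual code:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv():
    read KEY=VALUE lines and export any keys not already set."""
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            # Existing environment variables win, matching load_dotenv's default
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_env_file()
token = os.environ.get("F5_GUARDRAILS_API_TOKEN")
```

In practice, just install `python-dotenv` and let the scripts call it for you; the sketch only shows why a shell `export` always takes precedence over the `.env` file.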
Each script lives under `examples/`. Run them with plain Python once your env vars are set.

```bash
# Call PromptAPI inline and print the provider response if cleared
python examples/prompt_api_inline.py

# Call ScanAPI to vet prompts before forwarding to OpenAI
python examples/scans_api_out_of_band.py

# Export historical logs to CSV (see --help for options)
python examples/extract_log_data.py "17/10/25 00:00:00" "17/10/25 23:59:59" \
  --out prompt_logs_oct17.csv --max-records 100
```

Launch any of the Streamlit demos from the repo root. Keep the terminal open while you test in the browser.
```bash
# Inline PromptAPI moderation with OpenAI
streamlit run examples/chatbot_inline.py

# Inline PromptAPI moderation with a provider picker (OpenAI / BioNeMo)
streamlit run examples/chatbot_inline_multi_model.py

# Out-of-band ScanAPI moderation (prompt + response)
streamlit run examples/chatbot_out-of-band.py

# Out-of-band with an OpenAI/NVIDIA selector (requires NVIDIA_API_KEY)
streamlit run examples/chatbot_out-of-band_multi_model.py
```

Each app walks through the pattern it demonstrates:
- Inline chatbots call F5 Guardrails before sending a prompt to the downstream LLM.
- Out-of-band chatbots scan both user input and the model’s reply before displaying it.
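The two flows can be sketched in a few lines. Here `scan` and `call_llm` are hypothetical stubs standing in for the F5 Guardrails APIs and the downstream provider; only the control flow mirrors the demos:

```python
def scan(text: str) -> bool:
    """Pretend scanner: block anything mentioning a 'secret'.
    In the demos this is a call to the Scan/Prompt APIs."""
    return "secret" not in text.lower()

def call_llm(prompt: str) -> str:
    """Stub for the downstream provider (OpenAI, NVIDIA, ...)."""
    return f"model reply to: {prompt}"

def inline_chat(prompt: str) -> str:
    # Inline: the prompt only reaches the provider if it is cleared.
    if not scan(prompt):
        return "Prompt blocked by moderation."
    return call_llm(prompt)

def out_of_band_chat(prompt: str) -> str:
    # Out-of-band: scan the user input AND the model's reply.
    if not scan(prompt):
        return "Prompt blocked by moderation."
    reply = call_llm(prompt)
    if not scan(reply):
        return "Response withheld by moderation."
    return reply
```

The trade-off is latency versus coverage: inline adds one scan to the critical path, while out-of-band pays for two scans but also vets what the model says back.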
The exporter accepts day/time ranges in UTC (`DD/MM/YY HH:MM:SS`) and a handful of optional arguments:

```bash
python examples/extract_log_data.py \
  "01/10/25 00:00:00" \
  "01/10/25 23:59:59" \
  --max-records 50 \
  --only-user \
  --out logs_oct01_me.csv
```

Use smaller pulls while testing (`--max-records`) and remove the cap once you're satisfied.
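Once a CSV lands on disk, a quick stdlib sanity check helps before any deeper analysis. The column names depend on your export, so this sketch only counts rows and echoes the header:

```python
import csv

def summarize_export(path: str) -> int:
    """Count records in an exported log CSV (header assumed on row one)."""
    with open(path, newline="") as fh:
        reader = csv.DictReader(fh)
        rows = list(reader)
    print(f"{path}: {len(rows)} records, columns: {reader.fieldnames}")
    return len(rows)
```

If the record count does not match what `--max-records` implied, widen the time range or re-check that the range really is in UTC.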
| Pattern | When to Use | Try It |
|---|---|---|
| Inline moderation | You want F5 Guardrails to decide if a prompt should reach the provider at all. | prompt_api_inline.py, chatbot_inline.py, chatbot_inline_multi_model.py |
| Out-of-band moderation | You want the downstream model’s answer but still need F5 Guardrails to validate both sides. | scans_api_out_of_band.py, chatbot_out-of-band.py, chatbot_out-of-band_multi_model.py |
- 401 / 403 responses: confirm the F5 Guardrails token you’re using has access to the ScanAPI/PromptAPI routes and that the correct env var is set.
- Streamlit can't see your env vars: ensure you exported them in the same shell before running `streamlit run …`, or place them in a `.env` file.
- NVIDIA demo errors: double-check `NVIDIA_API_KEY` and verify your account has access to the model listed in `chatbot_out-of-band_multi_model.py`.
- SSL issues behind strict firewalls: set `REQUESTS_CA_BUNDLE` or use a corporate proxy that trusts Calypso endpoints.
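A quick preflight check can catch most of the env-var issues above before a demo launches. This is an illustrative helper, not part of the repo; the variable names come from the setup section:

```python
import os

# Required/optional variable names taken from the setup instructions.
REQUIRED = ["F5_GUARDRAILS_API_TOKEN"]           # needed by every demo
OPTIONAL = ["OPENAI_API_KEY", "NVIDIA_API_KEY"]  # needed only by specific demos

def preflight(env=os.environ) -> list:
    """Return the list of required variables that are missing from env."""
    missing = [name for name in REQUIRED if not env.get(name)]
    for name in OPTIONAL:
        if not env.get(name):
            print(f"note: {name} is unset; demos that need it will fail")
    return missing
```

Run `preflight()` at the top of your own integration; a non-empty return means an `export` (or `.env` entry) is missing from the shell you launched from.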
Copyright F5, Inc. 2026. Licensed under the Apache License, Version 2.0. See LICENSE.
Have ideas for additional integrations? Open an issue or MR and we’ll add them to the gallery. Cheers! 🎉