Skip to content
#

llm-security-compliance-prompt-injection

Here are 13 public repositories matching this topic...

A comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.

  • Updated Apr 3, 2026

Basilisk — Open-source AI red teaming framework with genetic prompt evolution. Automated LLM security testing for GPT-4, Claude, Grok, Gemini. OWASP LLM Top 10 coverage. 32 attack modules.

  • Updated Apr 24, 2026
  • Python

Detect and sanitize prompt injection attacks in Rails apps. Protects against direct injection (users hacking your LLMs via form inputs) and indirect injection (malicious prompts stored for other LLMs to scrape). ~70 detection patterns across 7 attack categories with configurable sensitivity levels. Now includes resource extraction detection pattern

  • Updated Feb 25, 2026
  • Ruby

🛡️ Explore tools for securing Large Language Models, uncovering their strengths and weaknesses in the realm of offensive and defensive security.

  • Updated Apr 24, 2026

Risk-Aware Introspective RAG (RAI-RAG) is a safety-aligned RAG framework integrating introspective reasoning, risk-aware retrieval gating, and secure evidence filtering to build trustworthy, robust, and secure LLM and agentic AI systems.

  • Updated Mar 7, 2026
  • Python

Improve this page

Add a description, image, and links to the llm-security-compliance-prompt-injection topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the llm-security-compliance-prompt-injection topic, visit your repo's landing page and select "manage topics."

Learn more