
private-ai

Here are 75 public repositories matching this topic...

Run Claude Code 100% on-device with local AI on Apple Silicon. MLX-native Anthropic-API server, 65 tok/s Qwen 3.5 122B, Llama 3.3 70B, Gemma 4 31B. Private, offline, airgap-ready. Built for NDA / legal / healthcare workflows.

  • Updated Apr 22, 2026
  • Python

An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.

  • Updated Aug 11, 2025
  • Python
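The core idea behind a Parent Document Retriever, as used in the pipeline above, can be shown without LangChain or ChromaDB: small chunks are indexed for query matching, but the larger parent document they came from is what gets returned as context. Below is a minimal stdlib sketch of that idea; the class name, toy bag-of-words scoring, and chunk size are illustrative assumptions, not code from the repository.

```python
# Minimal sketch of parent-document retrieval: index small chunks,
# return the whole parent document of the best-matching chunk.
# SimpleParentRetriever and the bag-of-words scoring are illustrative
# stand-ins for the LangChain/ChromaDB components the repo actually uses.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SimpleParentRetriever:
    def __init__(self, docs: list[str], chunk_words: int = 8):
        # Split each parent document into small chunks, remembering
        # which parent each chunk belongs to.
        self.docs = docs
        self.chunks: list[tuple[Counter, int]] = []
        for doc_id, doc in enumerate(docs):
            words = doc.lower().split()
            for i in range(0, len(words), chunk_words):
                self.chunks.append((Counter(words[i:i + chunk_words]), doc_id))

    def retrieve(self, query: str) -> str:
        # Match the query against small chunks, but return the full
        # parent document so the LLM gets wider context.
        q = Counter(query.lower().split())
        best = max(self.chunks, key=lambda chunk: cosine(q, chunk[0]))
        return self.docs[best[1]]
```

A real pipeline would swap the bag-of-words scoring for embedding similarity from a vector store, but the chunk-to-parent indirection is the same.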

The Private AI Setup Dream Guide for Demos automates installation of the software needed for a local private AI setup, using AI models (LLMs and diffusion models) for use cases such as general assistance, business ideas, coding, image generation, systems administration, marketing, and planning.

  • Updated Apr 22, 2026
  • Shell

Local LLM inference server for Qwen models on Apple Silicon, built on MLX. Private, offline-first AI development with no subscriptions; integrates with OpenCode for AI-powered coding workflows.

  • Updated Apr 21, 2026
  • Shell
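Local inference servers like the ones above are usually reached over an OpenAI-compatible HTTP API, which is also how clients such as OpenCode typically connect. A minimal stdlib client sketch follows; the base URL, port, model name, and the `/v1/chat/completions` path are assumptions about the server's configuration, so check the specific repo's docs.

```python
# Minimal stdlib sketch of talking to a local OpenAI-compatible chat
# endpoint. The URL, model name, and endpoint path are placeholders --
# they depend on how the local server is configured.
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for an OpenAI-style /v1/chat/completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(base_url: str, model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    req = build_chat_request(base_url, model, prompt)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With a server running locally, a call might look like `chat("http://localhost:8080", "qwen-local", "hello")` (port and model name hypothetical). Because everything stays on the loopback interface, no prompt or document ever leaves the machine.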
