AI-Hub is a cross-platform installer, launcher, and runtime toolkit for creative and conversational AI workflows. It ships safe-by-default shell helpers, schema-driven Python runtimes, curated manifests, and lightweight launchers so newcomers can get Stable Diffusion, KoboldAI, and SillyTavern running with predictable results.
- Platforms: Linux (desktop/headless) with first-class WSL2 and Windows launcher parity.
- Focus: Repeatable installs, GPU-aware defaults, resilient downloads, and transparent runtime helpers.
- Audience: Makers who want a single command to bootstrap AI apps and a single menu/web UI to keep them updated.
- Clone the repo on a supported Linux distro (or WSL2/Ubuntu on Windows).
- From the repo root:

  ```bash
  chmod +x install.sh
  ./install.sh
  ```
- The installer defaults to the Web Launcher (the single UX surface). It records logs to `~/.config/aihub/logs/install-YYYYMMDD.log` and creates OS-appropriate shortcuts.
- Launch again anytime with `./launcher/linux/start_web_launcher.sh` (Linux/WSL) or `launcher/windows/start_web_launcher.ps1` (Windows). Legacy `./launcher/linux/aihub_menu.sh` now redirects to the Web Launcher at `http://127.0.0.1:3939`.
- Install and open Git Bash (or use WSL2 with Ubuntu) so the repo can be cloned and bash-compatible paths resolve correctly.
- Ensure a Windows package manager is available (`winget` or Chocolatey) for dependency installs triggered by the launcher wrappers.
- From a PowerShell terminal in the repo root, run either the batch or PowerShell installer:

  ```powershell
  .\install.bat    # or: .\install.ps1
  ```
- The installer logs to `%APPDATA%\AIHub\logs\install-YYYYMMDD.log`, and shortcuts are created under the Start Menu and Desktop (matching `.lnk`, `.bat`, and `.ps1` wrappers called by the launchers).
- Re-launch anytime via `launcher\windows\start_web_launcher.ps1` (web UI at `http://127.0.0.1:3939`) or the dedicated action wrappers like `launcher\windows\install_webui.ps1` and `launcher\windows\run_webui.ps1`. The Linux instructions above remain unchanged for WSL.
Need a hands-free run? `./install.sh --headless --install webui --gpu nvidia` mirrors the guided flow without dialogs. Add `--config <file>` to feed a JSON/env config (see `docs/headless_config.md`).
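For unattended runs, a config file keeps the flags out of your shell history. A minimal sketch, assuming the JSON keys mirror the CLI flags one-for-one (the exact schema lives in `docs/headless_config.md`):

```shell
# Illustrative headless config; the key names assume a 1:1 mapping to the
# CLI flags above -- verify against docs/headless_config.md before relying on them.
cat > aihub-headless.json <<'EOF'
{
  "headless": true,
  "gpu": "nvidia",
  "install": "webui"
}
EOF
# Then: ./install.sh --headless --config aihub-headless.json
```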
AI-Hub keeps system-facing logic in bash and workflow logic in Python. The major building blocks are:
System detection, installers, and launch helpers. Key scripts include:
- `install/` – distro-aware bootstrap, dependency checks, GPU detection, and shortcut creation.
- `launch/` – start/stop helpers for Stable Diffusion WebUI, KoboldAI, SillyTavern, ComfyUI, and supporting services.
- `filters/` – model/LoRA filtering and manifest utilities.
- `helpers/` – logging, retries, download wrappers, and configuration readers used by the menu and web launcher.
All new/updated scripts enforce `set -euo pipefail`, quote variables, and are safe to re-run.
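The conventions above can be sketched in a few lines. This is a minimal illustration of the pattern, not the repo's actual helpers — the `log` and `retry` function names are assumptions:

```shell
#!/usr/bin/env bash
# Sketch of the safe-by-default pattern modules/shell scripts follow.
# (The log/retry helper names are illustrative, not the repo's real API.)
set -euo pipefail

log() { printf '[aihub] %s\n' "$*"; }

# Retry a command up to $1 times, pausing briefly between attempts.
retry() {
  local attempts="$1" n=1
  shift
  until "$@"; do
    if [ "$n" -ge "$attempts" ]; then
      log "failed after ${n} attempts: $*"
      return 1
    fi
    log "retrying (${n}/${attempts}): $*"
    n=$((n + 1))
    sleep 1
  done
}

retry 3 true
```

Because every variable is quoted and each helper returns a meaningful exit status, scripts built this way can be interrupted and re-run without leaving partial state behind.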
Schema-first runtimes that expose structured JSON workflows used by the web launcher and CLI:
- `prompt_builder/` – Scene-driven prompt compiler with deterministic/heuristic prompt assembly, LoRA call lists, and `apply_feedback_to_scene` directives for iterative refinements.
- `character_studio/` – Character card management, dataset prep, captioning/tagging helpers, and `apply_feedback_to_character` for deterministic key/value updates.
- `web_launcher/` – HTTP server routes that surface installs, manifests, prompt compilation, and character registry reads to the browser UI. Configurable via `AIHUB_WEB_HOST`, `AIHUB_WEB_PORT`, and `AIHUB_WEB_TOKEN`/`--auth-token`.
- `hardware/` – GPU/CPU probes surfaced to launchers and logs.
- `audio/` and `video/` – multimedia helpers kept separate from install logic.
- `registry.py` & `models/` – typed dataclasses and helpers shared across runtimes.
- `modules/bootstrap/` – workspace prep and common environment checks reused by installers.
- `modules/config_service/` – config parsing and persistence for headless runs and launchers.
- `manifests/` – JSON metadata for models and LoRAs (hash, size, tags, mirrors, suggested frontends).
- `launcher/` – cross-platform entrypoints: bash, PowerShell, batch, and Python thin wrappers for menus and GPU hints.
- `docs/` – quickstarts, performance flags, roadmap, and launcher notes.
```
[install.sh or install.ps1]
 │
 ├─► Shell bootstrap (GPU + deps)
 │    ├─ validates packages
 │    ├─ detects NVIDIA/AMD/Intel/CPU
 │    └─ creates shortcuts + logs
 │
 └─► Launcher choice
      └─ Web Launcher (launcher/linux/start_web_launcher.sh)
           └─ HTTP routes → Python runtimes → manifests/config
```
```
[Web Launcher / Menu action]
 │
 ├─ Install target (webui/kobold/sillytavern/loras/models)
 │    └─ shell installers + manifests + workspace prep
 │
 ├─ Run target
 │    └─ shell launchers (respecting GPU flags, low VRAM, DirectML)
 │
 ├─ Prompt Builder
 │    └─ POST scene JSON → prompt_builder compiler → structured prompt output
 │
 └─ Character Studio
      └─ card/dataset/tagging helpers → JSON responses and logs
```
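The Prompt Builder leg of that flow is a plain HTTP POST of scene JSON. A hedged sketch — the endpoint path and scene fields below are assumptions for illustration, not confirmed route names; check `python/web_launcher` for the real routes and schema:

```shell
# The scene payload fields and the /api/prompt/compile path are illustrative
# placeholders, not the repo's documented API.
SCENE='{"subject": "lighthouse at dusk", "style": "oil painting", "loras": ["coastal_v2"]}'
# With the Web Launcher running on the default port:
# curl -s -X POST "http://127.0.0.1:3939/api/prompt/compile" \
#   -H "Authorization: Bearer ${AIHUB_WEB_TOKEN}" \
#   -H "Content-Type: application/json" \
#   -d "$SCENE"
```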
- Start with `./launcher/linux/start_web_launcher.sh` (Linux/WSL) or the matching PowerShell/Batch/macOS wrappers in `launcher/windows` and `launcher/linux`.
- Defaults to `http://127.0.0.1:3939`; override host/port with `AIHUB_WEB_HOST`/`AIHUB_WEB_PORT`.
- Protect APIs with `AIHUB_WEB_TOKEN` or `--auth-token`.
- The Web Launcher UI already includes Guided Scene Builder + Quick Prompt for Prompt Builder and a Character Studio page for cards, datasets, and tag helpers.
- Surfaced routes include install triggers, manifest browsing, prompt compilation (with history saved to `~/.cache/aihub/prompt_builder/prompt_history.json`), character registry reads, and job logs. Prompt bundles are written to `~/.cache/aihub/prompt_builder/prompt_bundle.json` unless `PROMPT_BUNDLE_PATH` is set.
- `./launcher/linux/aihub_menu.sh` now redirects to the Web Launcher to keep Linux UX aligned with other platforms.
- Headless install: `./install.sh --headless --gpu <nvidia|amd|intel|cpu> --install <webui|kobold|sillytavern|loras|models>`
- Use `--config <file>` for repeatable unattended runs (JSON or env-style). See `docs/headless_config.md`.
- After install, re-run launchers directly (e.g., `./modules/shell/run_webui.sh`, `./modules/shell/run_kobold.sh`) or use the menu buttons. On Windows, the matching `.ps1`/`.bat` wrappers are available (for example, `launcher\windows\install_webui.ps1`, `launcher\windows\run_kobold.bat`, `launcher\windows\health_summary.ps1`).
- Performance flags: FP16 defaults on NVIDIA; xFormers is offered for NVIDIA; DirectML toggles apply on Windows/WSL for AMD/Intel; low-VRAM mode adds `--medvram` for WebUI. Details in `docs/performance_flags.md`.
- GPU guidance: detected GPUs are logged and surfaced during install; AMD notes point to ROCm; Intel notes point to oneAPI/OpenVINO; CPU mode remains available.
- Shortcuts: Linux `.desktop`, Windows `.lnk`/`.bat`/`.ps1`, macOS `.command`/app bundle. Locations and cleanup steps in `docs/shortcuts.md`.
- Logs: all installers and launchers write to `~/.config/aihub/logs/install-YYYYMMDD.log` (or `%APPDATA%\AIHub\logs\install-YYYYMMDD.log` on Windows). Menu/web flows reuse the same log for troubleshooting.
- Environment variables:
  - `AIHUB_WEB_HOST`/`AIHUB_WEB_PORT` – bind address/port for the web launcher.
  - `AIHUB_WEB_TOKEN` – bearer token required by web APIs.
  - `AIHUB_PYTHON` – override the Python interpreter for Windows wrappers.
  - `AIHUB_LOG_PATH` – custom log destination when needed.
  - `AIHUB_LOG_DIR` – custom log directory for wrapper/log helpers.
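Putting the web-launcher variables together, a LAN-exposed session with token auth might look like this (the host, port, and token values are examples you should replace):

```shell
# Bind to all interfaces and require a bearer token.
# The values below are placeholders, not recommended defaults.
export AIHUB_WEB_HOST=0.0.0.0
export AIHUB_WEB_PORT=3939
export AIHUB_WEB_TOKEN='replace-with-a-long-random-string'
# ./launcher/linux/start_web_launcher.sh
```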
- Base models live in `$HOME/ai-hub/models/`; SD v1.5 is fetched by default. LoRAs and curated presets land in `~/AI/LoRAs`.
- Manifests list hashes, sizes, mirrors, tags, and frontend hints to keep downloads predictable.
- The Model and LoRA quickstart covers SD1.5/SDXL presets, download locations, and pairing flows across WebUI, KoboldAI, and SillyTavern.
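As a rough illustration of what a manifest entry carries, here is a sketch — the field names are inferred from the description above, not copied from the repo, and the hash/size/URL values are placeholders; the real schema lives under `manifests/`:

```shell
# Hypothetical manifest entry shape; check manifests/ for the actual schema.
cat > example-manifest-entry.json <<'EOF'
{
  "name": "sd-v1-5",
  "sha256": "<expected-file-hash>",
  "size_bytes": 0,
  "tags": ["base-model", "sd15"],
  "mirrors": ["https://example.com/sd-v1-5.safetensors"],
  "frontends": ["webui"]
}
EOF
```

Listing a hash and size alongside each mirror is what lets the download wrappers verify files and fall back to alternate hosts without user intervention.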
- Missing packages? Re-run `./install.sh` (it will prompt before installing and retries gracefully if you cancel).
- Slow downloads? Provide a Hugging Face token when prompted so `aria2c`/`wget` can use authenticated mirrors.
- No GPU detected? Continue with CPU mode; expect slower inference.
- Desktop icon missing? Verify `${XDG_DATA_HOME:-$HOME/.local/share}/applications/ai-hub-launcher.desktop` exists and your DE trusts local `.desktop` files on `~/Desktop`.
- Permission issues? Ensure your user can run `sudo` for package installs.
Contributions are welcome! Keep bash helpers small and idempotent, avoid wrapping imports in try/except, and mirror Python style (type hints + pathlib). Open a PR with focused changes and matching docs/tests where relevant.
This project is licensed under the terms described in the LICENSE file.