Use an Intel Arc series GPU to run Ollama, Stable Diffusion, Whisper, and Open WebUI for image generation, speech recognition, and interaction with Large Language Models (LLMs).
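On Linux, the usual way to expose an Intel Arc GPU to containers like these is to pass through the DRI render node. A minimal sketch (the image name, volume, and port are illustrative placeholders, not taken from this repo):

```shell
# Pass the Intel GPU into the container via /dev/dri (requires the host
# user/daemon to have access to the render node, typically group "render").
docker run --rm \
  --device /dev/dri:/dev/dri \
  -v ollama-data:/root/.ollama \
  -p 11434:11434 \
  some-intel-ollama-image   # placeholder image name
```

The `--device /dev/dri` flag is what lets the container's runtime (oneAPI, Vulkan, or VA-API) see the GPU; everything else is ordinary container plumbing.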
🎨 ComfyUI standalone pack for Intel GPUs | ComfyUI all-in-one pack for Intel graphics cards
KDE Plasma Widget that displays Intel GPU usage including whether or not video acceleration is being used.
A collection of scripts to automate tasks and get the best battery life.
Docker build scripts for TornadoVM on GPUs: https://github.com/beehive-lab/TornadoVM
GPU-accelerated LLaMA inference wrapper for legacy Vulkan-capable systems: a Pythonic way to run AI with knowledge (llm) on fire (vulkan).
Local-first subtitle workstation for speech-to-text, translation, review, and export.
Intel LevelZero JNI library for TornadoVM
🚀 Intel GPU support in TensorFlow & PyTorch: a quick guide to installing, configuring, and optimizing TensorFlow & PyTorch for Intel GPUs (Intel Arc, Data Center GPU Max).
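As a rough sketch of what such a setup involves, Intel publishes pip packages that add an XPU backend to each framework (package names per Intel's public docs; exact versions, extras, and any oneAPI runtime prerequisites may differ from what the guide itself recommends):

```shell
# Intel XPU backend for PyTorch
pip install intel-extension-for-pytorch

# Intel XPU plugin for TensorFlow
pip install intel-extension-for-tensorflow[xpu]
```

After installation, the frameworks address the GPU as an "XPU" device (e.g. `torch.xpu` in PyTorch) rather than CUDA.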
Pure-Go GPU/NPU compute wrapper for Intel NEO driver Level Zero APIs.
Benchmark results and performance data for the Intel Arc Pro B70 GPU (Xe2/Battlemage) - LLM inference, video generation, dual-GPU scaling.
Graduation qualification thesis, Moscow State University (MSU), 2025.
Qwen2-7B function-calling demo accelerated on an Intel dGPU.
BOINC Client Setup on Kubernetes
🔍♟️ Senchess AI - Production-ready chess piece detection powered by YOLOv8
Gnome Shell Extension to show 3D Rendering Load, Video Acceleration Load, and Power Usage of Intel GPUs