Commit 9eef072 (1 parent: b0c2d04)

feat(embeddings): add OPENAI_BASE_URL support for custom endpoints

Add support for custom OpenAI-compatible API endpoints via the
OPENAI_BASE_URL environment variable. This enables using:

- Ollama for local LLM inference
- LiteLLM Proxy for unified model access
- Groq, OpenRouter, and other OpenAI-compatible providers
- Self-hosted models (vLLM, text-generation-inference)

Changes:

- Read OPENAI_BASE_URL from environment in DEFAULT_EMBEDDING_CONFIG
- Update README.md with configuration documentation
- Update CHANGELOG.md with feature entry

Fixes #70

3 files changed (9 additions, 1 deletion)

CHANGELOG.md (6 additions, 0 deletions)

@@ -1,5 +1,11 @@
 # Changelog
 
+## [Unreleased]
+
+### Features
+
+* **embeddings:** Add `OPENAI_BASE_URL` support for custom OpenAI-compatible endpoints (e.g., Ollama, LiteLLM) ([#70](https://github.com/PatrickSys/codebase-context/issues/70))
+
 ## [1.8.2](https://github.com/PatrickSys/codebase-context/compare/v1.8.1...v1.8.2) (2026-03-05)
 
 

README.md (1 addition, 0 deletions)

@@ -346,6 +346,7 @@ Structured filters available: `framework`, `language`, `componentType`, `layer`
 | ------------------------ | -------------------------- | --------------------------------------------------------------------------------------------- |
 | `EMBEDDING_PROVIDER` | `transformers` | `openai` (fast, cloud) or `transformers` (local, private) |
 | `OPENAI_API_KEY` | - | Required only if using `openai` provider |
+| `OPENAI_BASE_URL` | `https://api.openai.com/v1` | Custom OpenAI-compatible API endpoint (LiteLLM, Groq, OpenRouter, etc.) |
 | `CODEBASE_ROOT` | - | Project root (CLI arg takes precedence) |
 | `CODEBASE_CONTEXT_DEBUG` | - | Set to `1` for verbose logging |
 | `EMBEDDING_MODEL` | `Xenova/bge-small-en-v1.5` | Local embedding model override (e.g. `onnx-community/granite-embedding-small-english-r2-ONNX` for Granite) |
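As a usage sketch of the new variable alongside the existing ones: the snippet below points the `openai` provider at a local Ollama server. It assumes Ollama's default port (11434) and its OpenAI-compatible `/v1` path; the API key value is a placeholder, since Ollama ignores it but the provider setting may still expect one.

```shell
# Use a local Ollama server as an OpenAI-compatible embedding endpoint.
export EMBEDDING_PROVIDER=openai
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=ollama   # placeholder; Ollama does not check keys
```

The same pattern applies to LiteLLM Proxy, Groq, or OpenRouter by swapping in their base URLs and a real key.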

src/embeddings/types.ts (2 additions, 1 deletion)

@@ -37,5 +37,6 @@ export const DEFAULT_EMBEDDING_CONFIG: EmbeddingConfig = {
   model: DEFAULT_MODEL,
   batchSize: 32,
   maxRetries: 3,
-  apiKey: process.env.OPENAI_API_KEY
+  apiKey: process.env.OPENAI_API_KEY,
+  apiEndpoint: process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1'
 };
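The endpoint-resolution logic added above can be sketched in isolation. The helper name `resolveApiEndpoint` is hypothetical (it is not part of the repo); it only mirrors the `OPENAI_BASE_URL || default` expression from `DEFAULT_EMBEDDING_CONFIG`:

```typescript
// Hypothetical helper mirroring how DEFAULT_EMBEDDING_CONFIG picks its
// endpoint: OPENAI_BASE_URL wins when set, otherwise the OpenAI default.
const DEFAULT_OPENAI_ENDPOINT = 'https://api.openai.com/v1';

function resolveApiEndpoint(env: Record<string, string | undefined>): string {
  // `||` (rather than `??`) means an empty string also falls back to the
  // default, so OPENAI_BASE_URL="" behaves the same as leaving it unset.
  return env.OPENAI_BASE_URL || DEFAULT_OPENAI_ENDPOINT;
}

console.log(resolveApiEndpoint({}));
console.log(resolveApiEndpoint({ OPENAI_BASE_URL: 'http://localhost:11434/v1' }));
```

One design note: because the fallback lives in the default config, downstream code can use `apiEndpoint` unconditionally instead of re-checking the environment.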
