A modern food ordering web application with AI-powered menu assistance, featuring React frontend, Node.js backend, and PostgreSQL database.
- Browse sushi menu with beautiful UI
- Add items to cart
- Place orders
- Session storage - customer information retained across orders for quick reordering
- Advanced form validation - onBlur field-level error checking with real-time visual feedback
- AI-powered chat assistant with RAG and a multi-tool LangChain agent
- Semantic search - find items by description, ingredients, dietary preferences
- Multi-tool AI agent - autonomous tool selection for complex queries (price filter, semantic search, item details)
- Deterministic answers for constrained queries - the RAG path parses price/category/spice from natural language and returns a consistent, sorted list when constraints apply (parity with debug tooling)
- Conversational interface - natural language menu recommendations
- Vector database - ChromaDB for semantic similarity search
- Docker-based development environment
- Automatic Docker health checks (with retries while containers become healthy)
- Docker cleanup - `cleanup:docker` / `dev:clean` avoid stale container name conflicts before `docker-compose up`
- Test coverage thresholds - backend (Jest) and frontend (Vitest) enforce ≥80% global coverage when you run `npm run test:coverage`
- Performance monitoring (OpenAI & PostgreSQL query timing)
Browse the menu, add items to cart, and view your order summary (click to enlarge)
Seamless checkout with customer information saved for future orders (click to enlarge)
Chat with the AI assistant for personalized menu recommendations and semantic search (click to enlarge)
- React 18
- Vite
- Tailwind CSS
- Axios
- Browser Session Storage (customer data persistence)
- Node.js
- Express
- PostgreSQL
- OpenAI API
- Docker & Docker Compose
- Concurrently for dev workflow
- Vector Database: ChromaDB for semantic search with cosine similarity
- Embeddings: OpenAI `text-embedding-3-small` (1536 dimensions)
- RAG: Retrieval from ChromaDB + GPT-4 synthesis; constrained queries use parsed filters + deterministic formatting (see `docs/03_AI_AGENT_AND_TOOLS.md`)
- Agent: LangChain `createAgent` with Zod-typed tools (`search_menu`, `filter_by_price`, `get_item_details`) and GPT-4
- LLM: GPT-4 for agent chat and RAG answers; GPT-3.5 for menu JSON generation
This application has three main feature areas, each with its own architecture diagram:
- RAG System Architecture - Shows how the AI assistant processes natural language queries
- Order Flow Diagram - Shows how customers place orders
- Testing Architecture - Shows the comprehensive test suite structure (see Testing section below)
This is the main technical diagram showing the complete AI workflow from user query to response, including vector search and LLM generation.
Note: This diagram shows the RAG architecture with readable text rendering.
The rag-architecture-dark.png diagram was created from the Mermaid-generated SVG (rag-architecture-dark.svg) using the following process:
- Source Files:
  - `docs/images/rag-architecture-dark.svg` - the source Mermaid SVG diagram
  - `docs/images/view-svg.html` - HTML viewer for rendering the SVG
- Conversion Process:
- Open the SVG using the HTML viewer in a browser
- Use browser "Print to PDF" functionality
- Open the PDF in macOS Preview (shows multiple pages/panes)
- Remove blank panes and export each visible pane as a separate PNG (e.g., `arch-1.png`, `arch-2.png`, `arch-3.png`)
- Combine the PNGs using ImageMagick:
montage arch-1.png arch-2.png arch-3.png -tile 1x3 -geometry +0+0 -background black rag-architecture-dark.png
Why This Approach? Mermaid SVG diagrams with CSS styling are complex. Converting to a multi-page PDF preserves text rendering, and ImageMagick's montage command efficiently combines the exported PNGs into a final diagram.
Note: This is a separate flow showing the ordering system (menu browsing and checkout), which is independent from the RAG system shown above.
Order Flow Diagram (click to expand)
graph TB
subgraph "User Journey"
A[Browse Menu] --> B[Add Items to Cart]
B --> C[View Cart]
C --> D[Click Place Order]
end
subgraph "Order Form Component"
D --> E{Check Session Storage}
E -->|First Time User| F[Empty Form]
E -->|Returning User| G[Pre-filled Form]
G -->|Load from sessionStorage| H[Name, Phone Auto-filled]
F --> I[User Fills Form]
H --> J[User Verifies Info]
I --> K[Enter Credit Card]
J --> K
end
subgraph "Order Submission"
K --> L[Submit Order]
L --> M[POST /api/orders]
M --> N[PostgreSQL Database]
N --> O[Order Saved]
end
subgraph "Session Storage Update"
O --> P[Save to sessionStorage]
P --> Q{Save Customer Info}
Q -->|Save| R[firstName, lastName, phone]
Q -->|Don't Save| S[creditCard - Security]
R --> T[sessionStorage.setItem]
end
subgraph "Next Order"
T --> U[Order Confirmation]
U --> V[User Adds More Items]
V --> D
end
style E fill:#004080
style P fill:#004d00
style S fill:#660033
style T fill:#663300
Key Features:
- First Order: User enters all information (name, phone, credit card)
- Subsequent Orders: Name and phone are pre-filled from session storage
- Security: Credit card is never stored - must be re-entered each time
- Session Scope: Data persists only for current browser session (cleared on tab/browser close)
- Privacy First: Uses `sessionStorage` instead of `localStorage` for better privacy
1. Vector Store (ChromaDB)
- Stores menu items as 1536-dimensional embeddings
- Enables semantic search: "spicy vegetarian options" matches relevant items
- Cosine similarity for relevance ranking
- Sub-100ms query latency
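The relevance ranking above is driven by cosine similarity between the query embedding and each stored item embedding. ChromaDB computes this internally; the sketch below only illustrates the metric itself:

```javascript
// Cosine similarity between two equal-length embedding vectors.
// Illustrative only — this is what ChromaDB's cosine distance is based on,
// not the app's actual search code.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score 1; orthogonal vectors score 0.
cosineSimilarity([1, 0], [2, 0]); // → 1
cosineSimilarity([1, 0], [0, 5]); // → 0
```

Because the score depends only on direction, not magnitude, it works well for comparing embeddings of texts of different lengths.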
2. RAG Pipeline
- Retrieval: Query → Embedding → Vector Search → Top-K documents
- Augmentation: Inject retrieved context into LLM prompt
- Generation: GPT-4 generates response grounded in actual menu data
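The augmentation step can be sketched as a simple prompt template. The wording and item fields below are illustrative assumptions, not the actual prompt in `ragService`:

```javascript
// Illustrative augmentation: inject retrieved menu items into the LLM prompt
// so the generation step is grounded in real menu data.
function buildRagPrompt(question, retrievedItems) {
  const context = retrievedItems
    .map((item) => `- ${item.name} ($${item.price}): ${item.description}`)
    .join("\n");
  return `Answer using ONLY the menu items below.\n\nMenu context:\n${context}\n\nQuestion: ${question}`;
}

const prompt = buildRagPrompt("Any spicy rolls?", [
  { name: "Spicy Tuna Roll", price: 8.5, description: "Tuna, sriracha mayo" },
]);
```

The key property is that the model sees only retrieved context, which is what keeps answers grounded rather than hallucinated.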
3. Agent (LangChain)
- Autonomous tool selection: Model chooses `search_menu`, `filter_by_price`, and/or `get_item_details`
- Structured tools: Zod schemas for tool inputs; price filter: inclusive min, exclusive max (e.g. "under $10" → `max: 10` means `price < 10`)
- History: Frontend sends prior turns to `POST /api/assistant/chat` when the agent is initialized
- UI: "Agent" badge when the multi-tool agent is online; otherwise the assistant falls back to RAG `/ask`
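The inclusive-min / exclusive-max convention for the price filter can be expressed as a small predicate. This is a sketch of the stated semantics, not the actual `filter_by_price` implementation:

```javascript
// Price filter semantics as documented: inclusive min, exclusive max,
// so "under $10" → { max: 10 } keeps items with price < 10.
function inPriceRange(price, { min = 0, max = Infinity } = {}) {
  return price >= min && price < max;
}

inPriceRange(9.99, { max: 10 }); // → true  (strictly under $10)
inPriceRange(10.0, { max: 10 }); // → false (exclusive upper bound)
inPriceRange(5.0, { min: 5 });   // → true  (inclusive lower bound)
```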
4. Session Storage for Customer Data
- Automatic Persistence: Customer information (name, phone) saved after first order
- Quick Reordering: Pre-fills form for subsequent orders in same session
- Security by Design: Credit card never stored - must be re-entered
- Privacy Conscious: Uses `sessionStorage` (cleared on browser close), not `localStorage`
- Implementation: React `useEffect` hook loads saved data on component mount
- User Experience: Seamless repeat ordering without re-entering personal info
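The save/load behavior can be sketched as two small functions written against any Storage-like object, so the logic runs outside a browser too. The key name `customerInfo` is illustrative; the app's actual key may differ:

```javascript
// Persist only name and phone — the credit card is deliberately never stored.
function saveCustomerInfo(storage, { firstName, lastName, phone }) {
  storage.setItem("customerInfo", JSON.stringify({ firstName, lastName, phone }));
}

function loadCustomerInfo(storage) {
  const raw = storage.getItem("customerInfo");
  return raw ? JSON.parse(raw) : null;
}

// In the app this would be window.sessionStorage; a plain object stands in here.
const fakeStorage = {
  data: {},
  setItem(k, v) { this.data[k] = v; },
  getItem(k) { return this.data[k] ?? null; },
};

saveCustomerInfo(fakeStorage, {
  firstName: "Ada", lastName: "Lovelace", phone: "5551234567",
  creditCard: "4111111111111111", // present in the form, dropped before persisting
});
```

Destructuring only the safe fields in `saveCustomerInfo` is what guarantees the card number can never reach storage, regardless of what the form object contains.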
5. Advanced Form Validation
- onBlur Validation: Field-level checking when user exits input (tab or click away)
- Consolidated Error Messages: Multiple validation rules combined into single user-friendly message
- Real-time Visual Feedback: Invalid fields display red border and error text below input
- Smart Button Control: Submit button disabled automatically when any field has errors
- Validation Rules:
- Required fields: name, last name, phone (exactly 10 digits), credit card (13-16 digits)
- Format checking: phone numbers and credit cards
- Length constraints with intelligent error messaging
- User Experience: Immediate feedback without requiring form submission attempt
- Implementation: Custom React validation system with field-level state management
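The stated rules (phone exactly 10 digits, credit card 13-16 digits, consolidated messages) can be mirrored with plain predicates. The app's real validator lives inside the React components; this sketch only encodes the rules listed above:

```javascript
// Field-level validation mirroring the documented rules. Returns a single
// consolidated message per field, or null when the value is valid.
function validateField(name, value) {
  const errors = [];
  if (!value) errors.push("is required");
  if (name === "phone" && value && !/^\d{10}$/.test(value)) {
    errors.push("must be exactly 10 digits");
  }
  if (name === "creditCard" && value && !/^\d{13,16}$/.test(value)) {
    errors.push("must be 13-16 digits");
  }
  return errors.length ? `${name} ${errors.join(" and ")}` : null;
}

validateField("phone", "5551234567"); // → null (valid)
validateField("creditCard", "1234");  // → "creditCard must be 13-16 digits"
```

Wiring this into an `onBlur` handler that writes into per-field error state is what produces the red border and inline message described above.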
6. Testing & Quality Assurance
- Automated suite: Jest (backend) + Vitest (frontend); run `npm test` for the full run
- Backend: Orders API, assistant routes, RAG/agent/vector/menu services, parity checks for constrained queries
- Frontend: `App` integration (menu, cart, checkout), `OrderForm`, `AIAssistant`, cart/menu components
- Coverage gate: `npm run test:coverage` enforces ≥80% statements/branches/functions/lines (global) in both packages
- Error handling: Order flow errors (validation, DB, network, duplicates) aligned with UI behavior
7. Example Flow - AI Assistant
User: "Show me spicy vegetarian options under $15"
Step 1: Agent analyzes query
  → Needs: semantic search + price filter
Step 2: Tool Calls
  → search_menu("spicy vegetarian")
  → filter_by_price(15)
Step 3: Vector Search
  → Generate embedding for "spicy vegetarian"
  → ChromaDB returns top 5 matches (~80ms)
  → Filter results by price < $15
Step 4: Response
  → **Agent path**: GPT-4 composes the reply from tool results (may chain multiple tools).
  → **RAG path** (`/ask`): if the question implies price/category/spice constraints, the app may return a **fixed sorted list** without LLM synthesis for consistency.

Total time varies with model latency and number of tool calls.
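The constraint parsing that enables the deterministic RAG path can be sketched as a small matcher. The real parser also handles category and spice and is more robust; this toy version only pulls a price bound out of phrases like "under $15":

```javascript
// Toy constraint parser: extract an exclusive price ceiling from natural
// language. The actual parsing logic in the app is assumed to differ.
function parsePriceConstraint(query) {
  const m = query.toLowerCase().match(/(?:under|below|less than)\s*\$?(\d+(?:\.\d+)?)/);
  return m ? { max: Number(m[1]) } : null;
}

parsePriceConstraint("Show me spicy vegetarian options under $15"); // → { max: 15 }
parsePriceConstraint("What rolls do you have?");                    // → null
```

When a constraint like this is detected, the app can filter and sort the menu directly instead of asking the LLM, which is what makes the answer reproducible across runs.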
- Node.js 16+ installed
- Docker Desktop installed and running (the app will check automatically)
- OpenAI API key (optional, for AI features)
# Change into the repository
cd /Users/sbecker11/workspace-sushi/sushi-rag-app
# Install all dependencies (root, backend, frontend)
npm run install-all
# Create .env file from template
cp env.example .env
# Set up database (one-time setup)
npm run db:setup
# Start the application - this does everything!
# (kills old processes, starts Docker, checks health, starts servers)
npm run dev

The app will be available at:
- Frontend: http://localhost:5173
- Backend API: http://localhost:3001 (see `PORT` in `.env`; frontend uses `VITE_API_URL` if set)
Create a .env file in the root directory (not in backend):
# Backend Configuration
PORT=3001
# PostgreSQL Docker Container Configuration
POSTGRES_CONTAINER=sushi-rag-app-postgres
POSTGRES_USER=sushi_rag_app_user
POSTGRES_PASSWORD=sushi_rag_app_password
POSTGRES_DB=sushi_rag_app_orders
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
# OpenAI Configuration (required for AI features)
OPENAI_API_KEY=your_openai_api_key_here
# ChromaDB Configuration (Vector Database)
CHROMA_HOST=localhost
CHROMA_PORT=8000
# Frontend URL (for CORS)
FRONTEND_URL=http://localhost:5173
# Performance Monitoring
# Set to 'true' to enable performance timing logs, 'false' to disable
ENABLE_PERFORMANCE_LOGGING=true

You can copy from the example:
cp env.example .env
# Then edit .env and add your OpenAI API key

Note: AI features require an OpenAI API key. You can create a new one at https://platform.openai.com/api-keys.
To sanity-check the key from the repo root:
npm run test:openai

npm run dev          # Start everything (cleanup, ports, Docker, health, db:setup, both servers + browser)
npm run dev:clean # Same as dev after explicit kill:ports + cleanup:docker (useful after conflicts)
npm run server # Start backend only (also runs prestart checks)
npm run client # Start frontend only (also runs prestart checks)
npm run prestart     # Run all pre-flight checks (ports, Docker cleanup, docker up, health, db:setup)
npm run docker:up    # Start Docker services
npm run docker:down # Stop Docker services
npm run docker:reset # Reset Docker services (removes data)
npm run db:setup     # Initialize database schema
npm run check:docker # Full Docker and services health check
npm run check:docker-daemon # Quick check if Docker Desktop is running
npm run cleanup:docker # Remove stale app containers (e.g. postgres/chromadb) before compose
npm run kill:ports # Kill processes on ports 3001 and 5173
npm run install-all # Install all dependencies
npm run test:coverage # Backend + frontend tests with coverage (must meet 80% thresholds)
npm run test:mcp # Python MCP unit tests (pytest under mcp/, auto venv)
npm run test:openai  # Quick script to verify OPENAI_API_KEY (see script output)

The app includes automatic health checks that run before starting. This prevents confusing errors if Docker isn't running.
- ✅ Docker Desktop is running
- ✅ Required services (PostgreSQL) are running
- ✅ Services are healthy and ready
When everything is ready:
========================================
Docker & Services Check
========================================
🔍 Checking if Docker Desktop is running...
✅ Docker Desktop is running

🔍 Checking required services...
✅ Service "sushi-rag-app-postgres" is running and healthy

✅ All checks passed! Starting application...
When Docker is not running:
❌ Docker Desktop is NOT running!
Please start Docker Desktop and try again.
You can start it by:
- Opening Docker Desktop from Applications
- Or running: open -a Docker
For more details, see Docker Workflow Guide
The app includes built-in performance monitoring for OpenAI API calls and PostgreSQL queries.
Control performance logging via the .env file:
# Enable performance timing logs
ENABLE_PERFORMANCE_LOGGING=true
# Disable performance timing logs (for production)
ENABLE_PERFORMANCE_LOGGING=false

When enabled, you'll see timing metrics in the backend console:
🤖 Calling OpenAI API to generate menu...
⏱️  OpenAI LLM Response Time: 3247ms
✅ Generated menu from OpenAI LLM

⏱️  PostgreSQL: Fetch all orders - 12ms
⏱️  PostgreSQL: Fetch items for 3 orders - 8ms
⏱️  PostgreSQL: BEGIN transaction - 2ms
⏱️  PostgreSQL: INSERT order - 15ms
⏱️  PostgreSQL: INSERT 3 order items - 7ms
⏱️  PostgreSQL: COMMIT transaction - 3ms
⏱️  PostgreSQL: Total transaction time - 27ms
- Development: Enable to monitor performance and identify bottlenecks
- Production: Disable to reduce log noise
- Debugging: Enable temporarily to diagnose slow queries
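A timing wrapper like the one behind these log lines can be sketched as follows. The helper name and exact log format in the backend are assumptions; only the `ENABLE_PERFORMANCE_LOGGING` gate is taken from the docs above:

```javascript
// Sketch of a timing wrapper gated by ENABLE_PERFORMANCE_LOGGING.
// Logs elapsed time even when the wrapped call throws.
async function timed(label, fn) {
  const enabled = process.env.ENABLE_PERFORMANCE_LOGGING === "true";
  const start = Date.now();
  try {
    return await fn();
  } finally {
    if (enabled) console.log(`⏱️  ${label} - ${Date.now() - start}ms`);
  }
}

// Hypothetical usage around a query:
// await timed("PostgreSQL: Fetch all orders", () => pool.query("SELECT * FROM orders"));
```

Reading the flag once per call keeps the wrapper cheap enough to leave in place and toggle purely via `.env`.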
sushi-rag-app/
├── backend/
│   ├── config/database.js
│   ├── database/setup.js
│   ├── routes/            # menu, orders, assistant (+ tests)
│   ├── services/          # menuService, vectorStore, ragService, agentService
│   └── server.js
├── frontend/
│   ├── src/
│   │   ├── components/    # Header, MenuGrid, MenuItem, Cart, OrderForm, AIAssistant, …
│   │   ├── App.jsx
│   │   └── main.jsx
│   └── index.html
├── docs/                  # Numbered guides (see Documentation below)
├── infra/
│   ├── local/README.md    # Local Docker profile notes
│   └── aws/               # Fargate-oriented notes + Terraform scaffold
├── mcp/                   # Python MCP server (Cursor / Claude Desktop) - same Chroma as app
├── scripts/               # Docker checks, port kill, OpenAI key smoke test, …
├── docker-compose.yml     # PostgreSQL + ChromaDB
├── env.example
└── package.json           # Root orchestration (dev, test, coverage)
The mcp/ folder runs a local Model Context Protocol server against your Chroma collection (sushi_menu by default). It is not part of the production web stack. Setup: mcp/README.md.
- `GET /api/menu` - Get menu items (OpenAI-generated JSON, with cache and static fallback)
- `POST /api/orders` - Create new order
- `GET /api/orders` - Get all orders
- `GET /api/orders/:id` - Get order details
- `POST /api/assistant/chat` - Multi-tool agent chat (history supported)
- `POST /api/assistant/ask` - RAG Q&A (constrained queries use deterministic listing when applicable)
- `POST /api/assistant/search` - Semantic search over the vector index
- `POST /api/assistant/debug` - Inspect inferred constraints + selected items (development / parity)
- `GET /api/assistant/status` - `{ vectorStore, rag, agent }` readiness flags
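A chat request with history might be assembled like this. The body field names (`message`, `history`) are assumptions for illustration; check the assistant route code for the actual contract:

```javascript
// Build a JSON body for POST /api/assistant/chat with prior conversation turns.
// Field names here are assumed, not confirmed against the route implementation.
function buildChatPayload(message, priorTurns) {
  return JSON.stringify({
    message,
    history: priorTurns.map(({ role, content }) => ({ role, content })),
  });
}

const body = buildChatPayload("Anything under $10?", [
  { role: "user", content: "Show me vegetarian rolls" },
  { role: "assistant", content: "We have Avocado Roll and Cucumber Roll." },
]);

// fetch("http://localhost:3001/api/assistant/chat", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// });
```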
Docker Desktop not running:
# Start Docker Desktop
open -a Docker
# Wait for it to be ready, then try again
npm run dev

Services not starting:
# Check service logs
docker-compose logs
# Reset services
npm run docker:reset

Container name conflict ("name already in use"):
# Prestart runs cleanup; you can also run manually:
npm run cleanup:docker
npm run docker:up
# Or a full clean dev start:
npm run dev:clean

Port conflicts:
# Check what's using the port
lsof -i :5432 # PostgreSQL (host mapping from compose)
lsof -i :3001 # Backend API
lsof -i :8000 # ChromaDB (default)
lsof -i :5173 # Frontend (Vite)

Connection errors:
# Verify PostgreSQL is running
docker ps | grep sushi-rag-app-postgres
# Check logs
docker logs sushi-rag-app-postgres
# Reinitialize database
npm run docker:reset
npm run db:setup

OpenAI / AI assistant errors (401, "incorrect API key"):
- Set `OPENAI_API_KEY` in the root `.env` (the backend loads it from there).
- Restart the backend after changing the key.
- Use `npm run test:openai` or check `GET /api/assistant/status` (and backend logs).
Module not found or module type warnings:
# Reinstall dependencies
npm run install-all
# Note: The project uses ES modules ("type": "module" in package.json)
# This eliminates warnings about module syntax

Port already in use:
# Automatic cleanup of all app ports (recommended)
npm run kill:ports
# Or manually kill specific port
lsof -ti:3001 | xargs kill -9 # Backend
lsof -ti:5173 | xargs kill -9  # Frontend

1. Install dependencies (first time only):

   npm install

2. Start everything with one command:

   npm run dev

   This automatically:
   - Cleans ports and removes stale app containers, then starts Docker services
   - Checks Docker Desktop and waits for Postgres/Chroma health when needed
   - Runs `db:setup` against the dev database
   - Starts backend (3001), frontend (5173), and opens the browser (script)

3. Make changes:
   - Frontend: changes hot-reload automatically
   - Backend: Nodemon restarts the server on file changes

4. Stop everything:

   # Stop app: Ctrl+C
   # Stop Docker: npm run docker:down
- Backend (Jest): Orders API, menu/assistant routes, `ragService`, `agentService`, `vectorStore`, `menuService`, constrained-query parity tests.
- Frontend (Vitest + Testing Library): `App` flows (menu load, cart, checkout, errors), `OrderForm`, `AIAssistant`, and presentational components (`Header`, `MenuGrid`, `Cart`, etc.).
- Coverage policy: `npm run test:coverage` runs Jest and Vitest with global thresholds of 80% (statements, branches, functions, lines) for each package.
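An 80% global gate like this is typically expressed through Jest's `coverageThreshold` option. This is a sketch of what the backend's config likely contains, not a copy of the repo's actual file:

```javascript
// jest.config.js (backend) — sketch of an 80% global coverage gate.
// Vitest exposes an equivalent setting under test.coverage.thresholds.
export default {
  collectCoverage: true,
  coverageThreshold: {
    global: { statements: 80, branches: 80, functions: 80, lines: 80 },
  },
};
```

With thresholds set, the test run exits non-zero when any metric dips below 80%, which is what lets `npm run test:coverage` act as a hard gate.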
Approximate counts (run npm test for the live total):
| Package | Tests (typical) |
|---|---|
| Backend | ~96 |
| Frontend | ~62 |
| Total | ~158 |
npm test # Backend then frontend
npm run test:backend # Jest only
npm run test:frontend # Vitest only
npm run test:watch # Both in watch mode (concurrently)
npm run test:coverage # Full suite + coverage (enforces 80% thresholds)

Reports: HTML/text under backend/coverage/ and frontend/coverage/ (the latter is gitignored).
- Testing Guide - setup, patterns, and troubleshooting
Testing architecture (high level)
graph LR
T[npm test] --> B[Jest backend]
T --> F[Vitest frontend]
B --> O[orders / menu / assistant routes]
B --> S[services: RAG, agent, vector, menu]
F --> A[App integration]
F --> C[components + OrderForm]
- Local: Docker Compose for Postgres + ChromaDB; Node processes for API and Vite (this repo's default workflow). See Deployment profiles and `infra/local/README.md`.
- AWS (scaffold): Fargate-oriented notes and a Terraform starting point under `infra/aws/terraform/` (VPC, ALB, ECS/Fargate, RDS, Secrets Manager - adjust before production use).
| Doc | Description |
|---|---|
| 00_SETUP.md | Install, .env, OpenAI, URLs, troubleshooting |
| 01_DOCKER_WORKFLOW.md | Docker startup, health checks, cleanup |
| 02_TESTING.md | Jest/Vitest, coverage, CI-oriented notes |
| 03_AI_AGENT_AND_TOOLS.md | Agent tools, RAG vs agent, price bounds, semantics |
| 04_QUERY_EXAMPLES.md | Example prompts (UI + MCP-style usage) |
| 05_DEPLOYMENT_PROFILES.md | Local vs AWS Fargate, Bedrock notes, checklist |
| mcp/README.md | Cursor / Claude MCP server (Python, Chroma + OpenAI) |
Index: docs/DOCUMENTATION_STRUCTURE.md
Archive: docs/archive/ - historical implementation notes
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
MIT
For issues or questions:
- Check the troubleshooting section
- Review the documentation
- Check Docker and service status:
npm run check:docker
Made with ❤️ and 🍣