This prototype demonstrates how an Agentic AI Orchestrator can empower next-generation Software-Defined Vehicles (SDVs) to understand a driver’s physiological state and environmental context to recommend safe, explainable interventions.
It aggregates three heterogeneous sources—biometric signals, environmental data, and vehicle telemetry—into a structured VehicleContext ontology. A Large Language Model-based Orchestrator reasons over this multi-modal context to propose interventions. Crucially, the system features hardware-aware routing logic that estimates reasoning complexity, dynamically deciding whether to execute on Edge compute (In-Vehicle High-Performance Compute SoC) or offload to the Cloud, reflecting real-world automotive latency and connectivity constraints.
Modern vehicles collect rich data, but driver state and context are often underused for safety-critical decisions. This prototype explores a fundamental challenge for the SDV era: how to build a context-aware intelligence that is auditable, multi-modal, and hardware-conscious.
Key goals include:
- Detecting impaired driving: Identifying early signs of driver stress or fatigue using ECG/EIT-derived features and proxies.
- Proposing safe, explainable behaviors: Generating auditable intervention plans (e.g., "Engage L3 Automation," "Offer medical rerouting").
- Optimizing Edge vs. Cloud compute: Respecting real-world embedded compute limitations by offloading complex reasoning only when necessary.
This system is built around a layered architecture:
A Pydantic-based VehicleContext model serves as the single source of truth, fusing three distinct domains:
- `biometric_data`: Proxy features derived from ECG/EIT (Heart Rate, RMSSD/HRV, Stress Index).
- `environmental_data`: Surrounding context (Weather, Visibility, Traffic Density, Road Type).
- `vehicle_state`: Critical telemetry (Speed, Fuel, Mechanical Health, Automation Availability, Distance to Hospital).
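A minimal sketch of such a Pydantic ontology follows; the field names and units are illustrative assumptions, not necessarily those in the repo's `ontology.py`:

```python
from pydantic import BaseModel, Field


class BiometricData(BaseModel):
    """Proxy features derived from ECG/EIT signals."""
    heart_rate_bpm: float
    rmssd_ms: float = Field(..., description="RMSSD-based HRV, milliseconds")
    stress_index: float = Field(..., ge=0.0, le=1.0)


class EnvironmentalData(BaseModel):
    """Surrounding context from weather and traffic services."""
    weather: str
    visibility_km: float
    traffic_density: str
    road_type: str


class VehicleState(BaseModel):
    """Critical vehicle telemetry."""
    speed_kph: float
    fuel_percent: float
    mechanical_health_ok: bool
    l3_automation_available: bool
    distance_to_hospital_km: float


class VehicleContext(BaseModel):
    """Single source of truth fusing the three domains."""
    biometric_data: BiometricData
    environmental_data: EnvironmentalData
    vehicle_state: VehicleState
```

Because every tool reads from this one validated model, the agent cannot act on malformed or partially-populated context.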
A LangChain ReAct-style agent accepts natural language driver queries (e.g., "I feel lightheaded, help me."). It then uses a reasoning loop (Thought -> Action -> Observation) to intelligently call two custom tools:
- `analyze_biometrics()`: A deterministic, rule-based proxy for a physiological classifier.
- `vehicle_intervention()`: A deterministic policy tool that uses transparent rules to determine a safe vehicle action plan.
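The two tools could look like the following deterministic sketches; the thresholds, state labels, and action strings are illustrative assumptions, not the repo's actual rules:

```python
def analyze_biometrics(heart_rate_bpm: float, rmssd_ms: float,
                       stress_index: float) -> str:
    """Rule-based proxy for a physiological classifier (thresholds illustrative)."""
    if stress_index >= 0.8 or heart_rate_bpm >= 120 or rmssd_ms < 15:
        return "acute"
    if stress_index >= 0.5 or heart_rate_bpm >= 100 or rmssd_ms < 25:
        return "elevated"
    return "normal"


def vehicle_intervention(driver_state: str, l3_available: bool,
                         distance_to_hospital_km: float) -> str:
    """Transparent policy table mapping driver state to a safe action plan."""
    if driver_state == "acute":
        if l3_available:
            return (f"Engage L3 Automation and reroute to nearest hospital "
                    f"({distance_to_hospital_km:.0f} km)")
        return "Advise immediate safe stop and alert emergency contact"
    if driver_state == "elevated":
        return "Suggest a rest stop and reduce cabin stimulation"
    return "No intervention required"
```

Keeping both tools as plain, inspectable rules means the LLM only orchestrates; every physical action the vehicle could take remains auditable.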
To mirror a realistic SDV deployment, a token-based complexity estimator runs over the agent's reasoning trace. If the estimated token count fits within a defined budget, the run is flagged as “In-Vehicle SoC” (Edge); otherwise it is tagged as “Cloud.” This simulates dynamic compute routing to minimize latency for safety-critical tasks.
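A minimal sketch of this routing decision, assuming a rough four-characters-per-token heuristic and an illustrative 1,500-token Edge budget (neither is taken from the repo):

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token, a common rule of thumb."""
    return max(1, len(text) // 4)


def route_compute(reasoning_trace: str, edge_token_budget: int = 1500) -> str:
    """Tag a reasoning trace for Edge or Cloud execution by estimated complexity."""
    tokens = estimate_tokens(reasoning_trace)
    return "In-Vehicle SoC" if tokens <= edge_token_budget else "Cloud"
```

In a production system the budget would reflect the actual SoC's memory and latency envelope, and a real tokenizer would replace the character heuristic.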
- Multi-Modal Context Fusion: Demonstrates deep integration across physiology, environment, and vehicle state within a single Pydantic ontology.
- Agentic AI & ReAct: Pushes the state-of-the-art beyond simple chatbots towards proactive, agentic orchestrators that "reason" and call tools.
- Auditable & Safety-First Logic: Keeps all physical vehicle interventions and biometric classifications as transparent, rule-based proxies, ensuring clear accountability and compliance.
- Hardware-Conscious SDV Architecture: Features an explicit model of Edge (In-Vehicle) vs. Cloud compute routing, crucial for next-generation zonal architectures.
- `src/context_aware_vehicle_agent/ontology.py`: Core Pydantic ontology for `VehicleContext`.
- `src/context_aware_vehicle_agent/agent.py`: LangChain agent, tools, and orchestration logic.
- `src/context_aware_vehicle_agent/context_builder.py`: Shared context override and merge utilities.
- `src/context_aware_vehicle_agent/schemas.py`: Shared API and orchestration response models.
- `src/context_aware_vehicle_agent/api.py`: FastAPI backend.
- `src/context_aware_vehicle_agent/ui.py`: Streamlit dashboard.
- `tests/`: Unit and API tests.
- `main_agent.py`, `api.py`, `app.py`, `ontology.py`: Thin top-level compatibility wrappers for local commands and demos.
```bash
pip install -r requirements.txt
export OPENAI_API_KEY="your-key-here"
python main_agent.py --query "I feel a bit lightheaded, help me."
```

The script will:
- Build a synthetic `VehicleContext` (plausible biometrics + traffic + vehicle state).
- Run the ReAct agent to:
  - Call `analyze_biometrics` on the ontology.
  - Call `vehicle_intervention` with the ontology and the classifier result.
- Print:
  - The chosen execution target (`In-Vehicle SoC` vs `Cloud`) with an estimated token count.
  - The final natural language explanation to the driver.
```bash
uvicorn api:app --reload
```
```bash
streamlit run app.py
```

- API root: `http://127.0.0.1:8000/`
- API docs: `http://127.0.0.1:8000/docs`
- Streamlit dashboard: `http://127.0.0.1:8501`
```bash
pytest
```