Demonstrating how intelligent, goal-directed behavior can emerge from simple perception–action loops — without symbolic rules, stored world models, or explicit planning.
Traditional AI systems rely on symbolic knowledge — rules, maps, and explicit reasoning. This project explores the opposite: can an agent behave intelligently with zero stored representations?
The answer is yes. The agent reaches its goal purely through state observation and reward feedback — no planning, no memory, no rules.
```
Observe current position
        ↓
Select action
        ↓
Receive reward signal
        ↓
Update behavior → Repeat
```
No symbolic rules. No explicit planning. No stored world model.
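The loop above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual code in `agent.py` or `environment.py`: a minimal grid world in which per-position action preferences are shaped only by the reward signal. The function name `run_episode`, the reward values, and the grid layout are all assumptions made for the example.

```python
import random

def run_episode(grid_size=5, goal=(4, 4), steps=200, epsilon=0.1, alpha=0.5, seed=0):
    """Reactive loop: observe position -> select action -> receive reward -> update.

    Hypothetical sketch; not the project's actual agent code.
    """
    rng = random.Random(seed)
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # four grid moves
    prefs = {}   # per-position action preferences, shaped purely by reward feedback
    pos = (0, 0)
    for _ in range(steps):
        # Observe current position; select action (epsilon-greedy over preferences).
        values = prefs.setdefault(pos, [0.0] * len(actions))
        if rng.random() < epsilon:
            a = rng.randrange(len(actions))
        else:
            a = max(range(len(actions)), key=lambda i: values[i])
        dx, dy = actions[a]
        nxt = (min(max(pos[0] + dx, 0), grid_size - 1),
               min(max(pos[1] + dy, 0), grid_size - 1))
        # Receive reward signal: +1 at the goal, small step cost otherwise.
        reward = 1.0 if nxt == goal else -0.01
        # Update behavior: nudge the preference for the action just taken.
        values[a] += alpha * (reward - values[a])
        if nxt == goal:
            return True
        pos = nxt
    return False
```

Nothing here resembles a world model or a plan: the agent holds no map of the grid and never simulates future states; it only adjusts the preference for the action it just took based on the reward it just received.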
- Reactive Intelligence — responds to environment directly
- Emergent Behavior — goal-directed actions arise from simple loops
- Reward-Based Adaptation — learns from feedback, not from rules
- Minimal Representation AI — inspired by Rodney Brooks' subsumption architecture
| Layer | Technology |
|---|---|
| Agent Logic | Python (agent.py, environment.py) |
| Visualization | HTML + CSS + JavaScript |
| Deployment | GitHub Pages |
```bash
git clone https://github.com/j26219096-prog/intelligence-without-representation.git
cd intelligence-without-representation
python app.py
```

Or just open `index.html` in your browser for the visual demo.
- Multi-agent interaction
- Dynamic goal states
- Reinforcement learning extension
- Visualization of learning dynamics
Jawahar R — BTech AI & Data Science, Dhanalakshmi Srinivasan Engineering College
GitHub
MIT