Snake AI DQN

A Python implementation of the classic Snake game powered by a Deep Q-Network (DQN). The project combines PyGame for visualization with PyTorch for deep reinforcement learning, producing an agent that learns to play Snake from experience.

Demo video: SnakeDQN.mp4

Features

  • Classic Snake game implementation using PyGame
  • Deep Q-Network (DQN) agent implemented in PyTorch
  • Experience replay for stable training
  • Model save/load functionality
  • Real-time visualization of training
  • Configurable hyperparameters
  • Support for both CPU and CUDA training

Requirements

pygame
numpy
torch

Installation

  1. Clone the repository:
git clone https://github.com/Abhigyan126/Snake-DQN
cd Snake-DQN
  2. Install dependencies:
pip install pygame numpy torch

Usage

Run the main script to start training:

python train.py

You'll be presented with three options:

  1. Start new training
  2. Continue training existing model
  3. Exit

The training process will display:

  • Real-time game visualization
  • Episode progress
  • Total rewards
  • Current exploration rate (epsilon)

Technical Details

Training pipeline

```mermaid
flowchart LR
    subgraph Game["Snake Game Environment"]
        State["State\n(12 inputs)"] --> Action
        Action --> Reward
        Action --> NextState["Next State"]
    end

    subgraph Agent["DQN Agent"]
        Policy["Policy Network\n(5 layers)"]
        Target["Target Network\n(5 layers)"]
        Memory["Replay Buffer\n(10000 capacity)"]

        Policy --> |"Select Action"| Action
        State --> Policy
        Memory --> |"Batch (64)"| Policy
        Policy --> |"Update"| Target
    end

    State --> Memory
    Action --> Memory
    Reward --> Memory
    NextState --> Memory
```
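The replay buffer in the diagram stores transitions and serves uniform random batches to the policy network. A minimal sketch of that component (class and method names here are illustrative, not necessarily the repository's):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=10_000):
        # deque with maxlen evicts the oldest transition once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        # Uniform random sampling breaks the temporal correlation between
        # consecutive frames, which is what stabilizes DQN training.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Sampling only begins once the buffer holds at least one batch worth of transitions; the capacity (10,000) and batch size (64) match the training parameters listed below.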

Neural Network Architecture

  • Input layer: 12 nodes (state space)
  • Hidden layers: 64 → 64 → 128 → 64 nodes
  • Output layer: 4 nodes (action space)
  • Activation function: Leaky ReLU
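The layer sizes above translate directly into PyTorch. The following is a sketch consistent with the listed architecture (12 inputs, hidden layers of 64, 64, 128, 64 with Leaky ReLU, 4 outputs), not necessarily the repository's exact module:

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Maps a 12-dim binary state vector to 4 Q-values, one per action."""

    def __init__(self, state_dim=12, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.LeakyReLU(),
            nn.Linear(64, 64), nn.LeakyReLU(),
            nn.Linear(64, 128), nn.LeakyReLU(),
            nn.Linear(128, 64), nn.LeakyReLU(),
            nn.Linear(64, action_dim),  # raw Q-values, no final activation
        )

    def forward(self, x):
        return self.net(x)
```

The output layer is left linear because Q-values are unbounded regression targets; the greedy action is simply `argmax` over the 4 outputs.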

State Representation

The game state consists of 12 binary values:

  • Danger detection (4 directions)
  • Current direction (4 possibilities)
  • Food location relative to snake (4 directions)
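One plausible encoding of those 12 binary features is sketched below. The function name, argument layout, and coordinate convention (y increasing downward, as in PyGame) are assumptions for illustration, not the repository's exact code:

```python
def encode_state(danger, direction, food_delta):
    """Build the 12-dim binary state vector.

    danger:     dict of booleans for a collision one step up/down/left/right
    direction:  one of "up", "down", "left", "right"
    food_delta: (dx, dy) from snake head to food, y increasing downward
    """
    dirs = ["up", "down", "left", "right"]
    dx, dy = food_delta
    return [
        # 4 danger flags
        int(danger["up"]), int(danger["down"]),
        int(danger["left"]), int(danger["right"]),
        # 4 one-hot current-direction flags
        *[int(direction == d) for d in dirs],
        # 4 relative food-location flags
        int(dy < 0),  # food above
        int(dy > 0),  # food below
        int(dx < 0),  # food to the left
        int(dx > 0),  # food to the right
    ]
```

A compact binary state like this keeps the input space small enough that a few fully connected layers suffice, at the cost of the agent not seeing the snake's full body layout.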

Training Parameters

  • Replay buffer size: 10,000
  • Batch size: 64
  • Learning rate: 0.001
  • Discount factor (gamma): 0.99
  • Initial epsilon: 1.0
  • Minimum epsilon: 0.01
  • Epsilon decay: 0.995
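The epsilon parameters above define a standard multiplicative decay schedule for epsilon-greedy exploration; a sketch of the resulting curve (function name is illustrative):

```python
def epsilon(episode, start=1.0, decay=0.995, minimum=0.01):
    """Exploration rate after a given number of episodes:
    start at 1.0, multiply by 0.995 per episode, floor at 0.01."""
    return max(minimum, start * decay ** episode)
```

With a decay of 0.995 per episode, the floor of 0.01 is reached after roughly 920 episodes, after which the agent acts almost entirely greedily.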

Project Structure

snake-ai-dqn/
├── train.py              # Main game and training logic
├── test.py               # Test script to run a trained model
├── README.md             # Project documentation
└── snake_dqn_model.pth   # Saved model checkpoint

