«YOLOv3» reproduces the paper "YOLOv3: An Incremental Improvement".
- Train on the COCO train2017 dataset and test on the COCO val2017 dataset with an input size of 416x416. The results are as follows (the exact version of the COCO dataset used in the paper could not be determined):
| | Original (darknet) | DeNA/PyTorch_YOLOv3 | zjykzj/YOLOv3 (This) |
|---|---|---|---|
| ARCH | YOLOv3 | YOLOv3 | YOLOv3 |
| COCO AP[IoU=0.50:0.95] | 0.310 | 0.311 | 0.400 |
| COCO AP[IoU=0.50] | 0.553 | 0.558 | 0.620 |
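For reference, COCO's primary metric, AP[IoU=0.50:0.95], is the mean of AP computed at ten IoU thresholds from 0.50 to 0.95 in steps of 0.05. A minimal sketch of that averaging (the per-threshold AP values below are made-up placeholders, not results from this repository):

```python
# Hedged sketch: AP[IoU=0.50:0.95] averages AP over ten IoU thresholds.
# The per-threshold AP values here are illustrative placeholders only.
thresholds = [0.50 + 0.05 * i for i in range(10)]   # 0.50, 0.55, ..., 0.95
aps = [0.62 - 0.5 * (t - 0.50) for t in thresholds]  # AP drops as IoU tightens
coco_ap = sum(aps) / len(aps)
print(len(thresholds), coco_ap)
```

This is why AP[IoU=0.50:0.95] is always lower than AP[IoU=0.50] alone: the stricter thresholds pull the average down.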
- [2024/05/19]v5.1. Optimize YOLOv3Loss. Use BCELoss instead of MSELoss for confidence loss calculation.
- [2024/05/09]v5.0. Refactoring YOLOv3 project, integrating yolov5 v7.0, reimplementing YOLOv3/YOLOv3-fast and YOLOv3Loss.
- [2023/07/19]v4.0. Add ultralytics/yolov5(485da42) transforms and support AMP training.
- [2023/06/22]v3.2. Remove Excess Code and Implementation.
- [2023/06/22]v3.1. Reconstruct DATA Module and Preprocessing Module.
- [2023/05/24]v3.0. Reconstructed the entire project following zjykzj/YOLOv2 and trained on the Pascal VOC and COCO datasets with YOLOv2Loss.
- [2023/04/16]v2.0. Fixed the preprocessing implementation; YOLOv3 network performance is now close to the original paper's implementation.
- [2023/02/16]v1.0. Implemented preliminary YOLOv3 network training and inference.
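The v5.1 change above (BCELoss in place of MSELoss for the confidence term) can be illustrated with a minimal, self-contained sketch. The function names and values below are illustrative, not the repository's actual code:

```python
import math

# Hedged sketch of the v5.1 change: the objectness (confidence) loss is
# computed with BCE instead of MSE. Names and values are illustrative only.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mse_confidence_loss(logit, target):
    # Pre-v5.1 style: squared error on the sigmoid-activated confidence.
    p = sigmoid(logit)
    return (p - target) ** 2

def bce_confidence_loss(logit, target):
    # v5.1 style: binary cross-entropy on the same prediction.
    p = sigmoid(logit)
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

# A confidently wrong prediction (high logit on an empty cell): BCE
# penalizes it far more strongly than MSE, which saturates near 1.
print(mse_confidence_loss(4.0, 0.0))  # ≈ 0.964
print(bce_confidence_loss(4.0, 0.0))  # ≈ 4.018
```

The stronger gradient from BCE on confidently wrong objectness predictions is the usual motivation for this swap.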
The purpose of this repository is to better understand the YOLO series of object detection networks. Note: this project depends heavily on the implementations of DeNA/PyTorch_YOLOv3 and NVIDIA/apex.

Note: the latest implementation of YOLOv3 in this repository is based entirely on ultralytics/yolov5 v7.0.
```shell
pip3 install -r requirements.txt
```

Or use a docker container:

```shell
docker run -it --runtime nvidia --gpus=all --shm-size=16g -v /etc/localtime:/etc/localtime -v $(pwd):/workdir --workdir=/workdir --name yolov3 ultralytics/yolov5:v7.0
```

```shell
python3 train.py --data VOC.yaml --weights "" --cfg yolov3_voc.yaml --img 640 --device 0 --yolov3loss
python3 train.py --data VOC.yaml --weights "" --cfg yolov3-fast_voc.yaml --img 640 --device 0 --yolov3loss
python3 train.py --data coco.yaml --weights "" --cfg yolov3_coco.yaml --img 640 --device 0 --yolov3loss
python3 train.py --data coco.yaml --weights "" --cfg yolov3-fast_coco.yaml --img 640 --device 0 --yolov3loss
```

```text
# python3 val.py --weights runs/yolov3_voc.pt --data VOC.yaml --device 0
yolov3_voc summary: 198 layers, 67238145 parameters, 0 gradients, 151.5 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 155/155 01:04
                   all       4952      12032      0.811      0.742      0.816      0.568
             aeroplane       4952        285      0.938      0.791      0.897      0.607
Speed: 0.1ms pre-process, 6.8ms inference, 1.6ms NMS per image at shape (32, 3, 640, 640)
```
```text
# python3 val.py --weights runs/yolov3-fast_voc.pt --data VOC.yaml --device 0
yolov3-fast_voc summary: 108 layers, 39945921 parameters, 0 gradients, 76.0 GFLOPs
                 Class     Images  Instances          P          R      mAP50   mAP50-95: 100%|██████████| 155/155 00:52
                   all       4952      12032      0.734      0.704      0.745       0.45
             aeroplane       4952        285      0.759      0.747      0.796      0.427
Speed: 0.1ms pre-process, 4.3ms inference, 1.6ms NMS per image at shape (32, 3, 640, 640)
```
```text
# python3 val.py --weights runs/yolov3_coco.pt --data coco.yaml --device 0
yolov3_coco summary: 198 layers, 67561245 parameters, 0 gradients, 152.5 GFLOPs
Speed: 0.1ms pre-process, 6.9ms inference, 2.1ms NMS per image at shape (32, 3, 640, 640)
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.400
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.620
```
```text
# python3 val.py --weights runs/yolov3-fast_coco.pt --data coco.yaml --device 0
yolov3-fast_coco summary: 108 layers, 40269021 parameters, 0 gradients, 77.0 GFLOPs
Speed: 0.1ms pre-process, 4.4ms inference, 2.2ms NMS per image at shape (32, 3, 640, 640)
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.329
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.560
```

```shell
python3 detect.py --weights runs/yolov3_voc.pt --source ./assets/voc2007-test/
python3 detect.py --weights runs/yolov3_coco.pt --source ./assets/coco/
```

- zhujian - Initial work - zjykzj
Anyone's participation is welcome! Open an issue or submit PRs.
Small note:
- Git commits should follow the Conventional Commits specification
- Versioning should follow the Semantic Versioning 2.0.0 specification
- README edits should conform to the standard-readme specification
Apache License 2.0 © 2022 zjykzj




