Commit c7893a0 (parent: 387dd97)

chore(src): link all scripts to code base and update README.

68 files changed

Lines changed: 10 additions & 8019 deletions

Note: large commits hide some content by default, so parts of the diff below are truncated.

Dockerfile

Lines changed: 0 additions & 41 deletions
This file was deleted.

README.md

Lines changed: 10 additions & 54 deletions
````diff
@@ -9,6 +9,8 @@ DeFlow: Decoder of Scene Flow Network in Autonomous Driving
 
 Task: Scene Flow Estimation in Autonomous Driving.
 
+📜 2024/07/24: Merging all scene flow code to a codebase to update one general repo only. This repo will save DeFlow README and [cluster slurm files](assets/slurm).
+
 🤗 2024/11/18 16:17: Update model and demo data download link through HuggingFace, personally I found that `wget` from the HuggingFace link is much faster than Zenodo.
 
 📜 2024/07/24: Merging SeFlow & DeFlow code together, lighter setup and easier running.
````
````diff
@@ -18,92 +20,46 @@ Task: Scene Flow Estimation in Autonomous Driving.
 Pre-trained weights for models are available in [Zenodo](https://zenodo.org/records/13744999)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow).
 Check usage in [2. Evaluation](#2-evaluation) or [3. Visualization](#3-visualization).
 
-**Scripts** quick view in our scripts:
-
-- `dataprocess/extract_*.py` : pre-process data before training to speed up the whole training time.
-  [Dataset we included now: Argoverse 2 and Waymo, more on the way: Nuscenes, custom data.]
-
-- `train.py`: Train the model and get model checkpoints. Pls remember to check the config.
-
-- `eval.py` : Evaluate the model on the validation/test set. And also output the zip file to upload to online leaderboard.
-
-- `save.py` : Will save result into h5py file, using [tool/visualization.py] to show results with interactive window.
-
-
-<details> <summary>🎁 <b>One repository, All methods!</b> </summary>
-<!-- <br> -->
-You can try following methods in our code without any effort to make your own benchmark.
-
-- [x] [SeFlow](https://arxiv.org/abs/2407.01702) (Ours 🚀): ECCV 2024
-- [x] [DeFlow](https://arxiv.org/abs/2401.16122) (Ours 🚀): ICRA 2024
-- [x] [FastFlow3d](https://arxiv.org/abs/2103.01306): RA-L 2021
-- [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weight can covert into our format easily through [the script](tools/zerof2ours.py).
-- [ ] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, faster 3x than original version because of [our CUDA speed up](assets/cuda/README.md), same (slightly better) performance. Done coding, public after review.
-- [ ] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. Done coding, public after review.
-<!-- - [ ] [Flow4D](https://arxiv.org/abs/2407.07995): 1st supervise network in the new leaderboard. Done coding, public after review. -->
-- [ ] ... more on the way
-
-</details>
-
-💡: Want to learn how to add your own network in this structure? Check [Contribute](assets/README.md#contribute) section and know more about the code. Fee free to pull request!
-
````
````diff
 ## 0. Setup
 
-**Environment**: Clone the repo and build the environment, check [detail installation](assets/README.md) for more information. [Conda](https://docs.conda.io/projects/miniconda/en/latest/)/[Mamba](https://github.com/mamba-org/mamba) is recommended.
+**Environment**: Clone the repo and build the environment, check [detail installation](./OpenSceneFlow/assets/README.md) for more information. [Conda](https://docs.conda.io/projects/miniconda/en/latest/)/[Mamba](https://github.com/mamba-org/mamba) is recommended.
 
 
 ```bash
-git clone --recursive https://github.com/KTH-RPL/DeFlow.git
-cd DeFlow
+git clone --recursive https://github.com/KTH-RPL/OpenSceneFlow.git
+cd OpenSceneFlow
 mamba env create -f environment.yaml
 ```
 
 CUDA package (need install nvcc compiler), the compile time is around 1-5 minutes:
 ```bash
-mamba activate deflow
+mamba activate opensf
 # CUDA already install in python environment. I also tested others version like 11.3, 11.4, 11.7, 11.8 all works
 cd assets/cuda/mmcv && python ./setup.py install && cd ../../..
 cd assets/cuda/chamfer3D && python ./setup.py install && cd ../../..
 ```
````
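The two CUDA-extension builds in the hunk above repeat the same cd/build/return pattern; a loop form is sketched below. The actual `setup.py install` call is left commented out, since it requires `nvcc` on the PATH, so the sketch only traces the order of the builds.

```shell
# Build each CUDA extension in turn; `set -e` stops at the first failure.
# Paths are the ones used in the README; run from the repository root.
set -e
built=""
for ext in assets/cuda/mmcv assets/cuda/chamfer3D; do
  echo "building $ext"
  # (cd "$ext" && python ./setup.py install)  # real build, requires nvcc
  built="$built$ext "
done
echo "processed: $built"
```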
````diff
 
-Or another environment setup choice is [Docker](https://en.wikipedia.org/wiki/Docker_(software)) which isolated environment, you can pull it by.
-If you have different arch, please build it by yourself `cd DeFlow && docker build -t zhangkin/seflow` by going through [build-docker-image](assets/README.md/#build-docker-image) section.
-```bash
-# option 1: pull from docker hub
-docker pull zhangkin/seflow
+Or another environment setup choice is [Docker](https://en.wikipedia.org/wiki/Docker_(software)) which isolated environment, check more information in [OpenSceneFlow/assets/README.md](./OpenSceneFlow/assets/README.md#docker-environment).
+
 
-# run container
-docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data --name deflow zhangkin/seflow /bin/zsh
-# then `mamba activate seflow` python environment is ready to use
-```
 
 ## 1. Run & Train
 
 Note: Prepare raw data and process train data only needed run once for the task. No need repeat the data process steps till you delete all data. We use [wandb](https://wandb.ai/) to log the training process, and you may want to change all `entity="kth-rpl"` to your own entity.
 
 ### Data Preparation
 
-Check [dataprocess/README.md](dataprocess/README.md#argoverse-20) for downloading tips for the raw Argoverse 2 dataset. Or maybe you want to have the **mini processed dataset** to try the code quickly, We directly provide one scene inside `train` and `val`. It already converted to `.h5` format and processed with the label data.
+Check [OpenSceneFlow/dataprocess/README.md](./OpenSceneFlow/dataprocess/README.md#argoverse-20) for downloading tips for the raw Argoverse 2 dataset. Or maybe you want to have the **mini processed dataset** to try the code quickly, We directly provide one scene inside `train` and `val`. It already converted to `.h5` format and processed with the label data.
 You can download it from [Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip) and extract it to the data folder. And then you can skip following steps and directly run the [training script](#train-the-model).
 
 ```bash
 wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/demo_data.zip
 unzip demo_data.zip -p /home/kin/data/av2
 ```
````
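One note on the `unzip` line kept in the context above: `unzip`'s flag for choosing an extraction directory is `-d` (`-p` pipes file contents to stdout instead). A self-contained sketch of `-d`-style extraction, using Python's `zipfile` command-line interface so no archive download is needed:

```shell
# Create a tiny stand-in archive, then extract it into an explicit directory,
# the equivalent of `unzip demo.zip -d out`.
workdir="$(mktemp -d)"
cd "$workdir"
printf 'hello' > a.txt
python3 -m zipfile -c demo.zip a.txt   # pack a.txt into demo.zip
mkdir -p out
python3 -m zipfile -e demo.zip out     # extract into out/
content="$(cat out/a.txt)"
echo "extracted: $content"
```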
````diff
 
-#### Prepare raw data
-
-Checking more information (step for downloading raw data, storage size, #frame etc) in [dataprocess/README.md](dataprocess/README.md). Extract all data to unified `.h5` format.
-[Runtime: Normally need 45 mins finished run following commands totally in setup mentioned in our paper]
-```bash
-python dataprocess/extract_av2.py --av2_type sensor --data_mode train --argo_dir /home/kin/data/av2 --output_dir /home/kin/data/av2/preprocess_v2
-python dataprocess/extract_av2.py --av2_type sensor --data_mode val --mask_dir /home/kin/data/av2/3d_scene_flow
-python dataprocess/extract_av2.py --av2_type sensor --data_mode test --mask_dir /home/kin/data/av2/3d_scene_flow
-```
-
````
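The three deleted `extract_av2.py` commands share one stem and differ only in `--data_mode` plus the per-mode flags (`--argo_dir`/`--output_dir` for train, `--mask_dir` for val/test). A sketch that assembles, but does not run, those command lines, using the example paths from the deleted block:

```shell
# Print the per-mode preprocessing commands; nothing is executed here.
modes=""
for mode in train val test; do
  if [ "$mode" = "train" ]; then
    extra="--argo_dir /home/kin/data/av2 --output_dir /home/kin/data/av2/preprocess_v2"
  else
    extra="--mask_dir /home/kin/data/av2/3d_scene_flow"
  fi
  echo "python dataprocess/extract_av2.py --av2_type sensor --data_mode $mode $extra"
  modes="$modes$mode "
done
```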
````diff
 ### Train the model
 
-All local benchmarking methods and ablation studies can be done through command with different config, check [`assets/slurm`](assets/slurm) for all the commands we used in DeFlow raw paper. You can check all parameters in [conf/config.yaml](conf/config.yaml) and [conf/model/deflow.yaml](conf/model/deflow.yaml), **if you will set wandb_mode=online**, maybe change all `entity="kth-rpl"` to your own account name.
+All local benchmarking methods and ablation studies can be done through command with different config, check [`assets/slurm`](assets/slurm) for all the commands we used in DeFlow raw paper. You can check all parameters in [OpenSceneFlow/conf/config.yaml](./OpenSceneFlow/conf/config.yaml) and [OpenSceneFlow/conf/model/deflow.yaml](./OpenSceneFlow/conf/model/deflow.yaml), **if you will set wandb_mode=online**, maybe change all `entity="kth-rpl"` to your own account name.
 
 Train DeFlow with the leaderboard submit config. [Runtime: Around 6-8 hours in 4x A100 GPUs.] Please change `batch_size`&`lr` accoordingly if you don't have enough GPU memory. (e.g. `batch_size=6` for 24GB GPU)
 ```bash
````
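The note above says to lower `batch_size` and `lr` together on smaller GPUs. One common heuristic is linear learning-rate scaling; the base values below are hypothetical placeholders, not taken from this repository's config:

```shell
# Linear scaling heuristic: lr = base_lr * batch_size / base_batch_size.
base_bs=16          # hypothetical reference batch size, not from the README
base_lr=0.002       # hypothetical reference learning rate, not from the README
bs=6                # e.g. what fits on a 24GB GPU, per the README note
lr="$(awk -v b="$bs" -v bb="$base_bs" -v bl="$base_lr" \
      'BEGIN { printf "%.5f", bl * b / bb }')"
echo "batch_size=$bs lr=$lr"
```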

assets/README.md

Lines changed: 0 additions & 115 deletions
This file was deleted.

assets/cuda/README.md

Lines changed: 0 additions & 21 deletions
This file was deleted.

0 commit comments
