# DeFlow: Decoder of Scene Flow Network in Autonomous Driving

Task: Scene Flow Estimation in Autonomous Driving.

📜 2024/07/24: Merging all scene flow code into one general codebase, so that only one repo needs to be updated. This repo will keep the DeFlow README and [cluster slurm files](assets/slurm).

🤗 2024/11/18 16:17: Updated the model and demo data download links to HuggingFace; personally, I found that `wget` from the HuggingFace link is much faster than from Zenodo.

Pre-trained weights for models are available in [Zenodo](https://zenodo.org/records/13744999)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow).
Check usage in [2. Evaluation](#2-evaluation) or [3. Visualization](#3-visualization).
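
For example, fetching a checkpoint with `wget` might look like the sketch below; the filename `deflow_best.ckpt` is a placeholder, so browse the HuggingFace repo for the actual file names.

```bash
# Hypothetical checkpoint download; replace the filename with a real one from
# https://huggingface.co/kin-zhang/OpenSceneFlow (the /resolve/ path serves raw files).
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/deflow_best.ckpt -P checkpoints/
```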

A quick view of the **scripts** in this repo:

- `dataprocess/extract_*.py`: pre-processes data before training to speed up the overall training time. [Datasets we include now: Argoverse 2 and Waymo; more on the way: nuScenes, custom data.]
- `train.py`: trains the model and saves model checkpoints. Please remember to check the config.
- `eval.py`: evaluates the model on the validation/test set, and also outputs the zip file to upload to the online leaderboard.
- `save.py`: saves results into an `.h5` file; use `tools/visualization.py` to show the results in an interactive window.
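
Under stated assumptions, a typical pass through these scripts might look like the sketch below; the Hydra-style overrides (`model=deflow`, `checkpoint=...`) are guesses based on the config layout, so check `conf/config.yaml` and each script before running.

```bash
# Hypothetical end-to-end flow: preprocess -> train -> evaluate -> save results.
python dataprocess/extract_av2.py --av2_type sensor --data_mode train
python train.py model=deflow                  # check the config first
python eval.py checkpoint=logs/deflow.ckpt    # also writes the leaderboard zip
python save.py checkpoint=logs/deflow.ckpt    # writes an .h5 file for tools/visualization.py
```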
<details> <summary>🎁 <b>One repository, All methods!</b> </summary>

You can try the following methods in our code and make your own benchmark without any extra effort:

- [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024; their pre-trained weights can be converted into our format easily through [the script](tools/zerof2ours.py).
- [ ] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021; 3x faster than the original version thanks to [our CUDA speed-up](assets/cuda/README.md), with the same (slightly better) performance. Done coding, public after review.
- [ ] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. Done coding, public after review.
<!-- - [ ] [Flow4D](https://arxiv.org/abs/2407.07995): 1st supervise network in the new leaderboard. Done coding, public after review. -->
- [ ] ... more on the way

</details>
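
As a concrete example of the ZeroFlow conversion mentioned above, the call might look like this; the `--input`/`--output` flag names are pure assumptions for illustration, so check `tools/zerof2ours.py` itself for the real interface.

```bash
# Hypothetical flags: convert a released ZeroFlow checkpoint into this repo's format.
python tools/zerof2ours.py --input zeroflow.ckpt --output zeroflow_ours.ckpt
```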

💡: Want to learn how to add your own network in this structure? Check the [Contribute](assets/README.md#contribute) section to learn more about the code. Feel free to open a pull request!

## 0. Setup

**Environment**: Clone the repo and build the environment; check the [detailed installation guide](./OpenSceneFlow/assets/README.md) for more information. [Conda](https://docs.conda.io/projects/miniconda/en/latest/)/[Mamba](https://github.com/mamba-org/mamba) is recommended.
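
A minimal setup sketch, assuming the repo ships a conda environment file (the file name `environment.yaml` and the repo URL are assumptions here; follow the detailed installation guide above for the authoritative steps):

```bash
# Hypothetical clone-and-create; adjust names/paths to the real repo layout.
git clone https://github.com/KTH-RPL/DeFlow.git
cd DeFlow
mamba env create -f environment.yaml
```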

The CUDA packages (which need the `nvcc` compiler installed) take around 1-5 minutes to compile:
```bash
mamba activate opensf
# CUDA is already installed in the python environment. I also tested other versions like 11.3, 11.4, 11.7, 11.8; all work.
cd assets/cuda/mmcv && python ./setup.py install && cd ../../..
cd assets/cuda/chamfer3D && python ./setup.py install && cd ../../..
```
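
After the build, a quick sanity check is to confirm that the `nvcc` toolkit and the CUDA version PyTorch was built against are compatible (both commands are standard; only the expectation that they roughly match is our assumption):

```bash
nvcc --version                                        # toolkit used to compile the extensions
python -c "import torch; print(torch.version.cuda)"   # CUDA version PyTorch was built with
```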

Another environment setup choice is [Docker](https://en.wikipedia.org/wiki/Docker_(software)), which gives you an isolated environment; check more information in [OpenSceneFlow/assets/README.md](./OpenSceneFlow/assets/README.md#docker-environment). If you have a different arch, please build the image yourself (`cd DeFlow && docker build -t zhangkin/seflow .`) by going through the [build-docker-image](assets/README.md/#build-docker-image) section.

```bash
# option 1: pull from docker hub
docker pull zhangkin/seflow

# run container
docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data --name deflow zhangkin/seflow /bin/zsh

# then `mamba activate seflow` and the python environment is ready to use
```
## 1. Run & Train

Note: Preparing the raw data and processing the training data only need to be run once for the task; there is no need to repeat the data processing steps unless you delete all the data. We use [wandb](https://wandb.ai/) to log the training process, and you may want to change all `entity="kth-rpl"` to your own entity.
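
One way to switch the wandb entity in bulk, assuming it appears literally as `entity="kth-rpl"` in the config and source files (adjust the search path to the repo layout):

```bash
# Replace every hard-coded wandb entity with your own account name.
grep -rl 'entity="kth-rpl"' . | xargs sed -i 's/entity="kth-rpl"/entity="your-entity"/g'
```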
### Data Preparation

Check [OpenSceneFlow/dataprocess/README.md](./OpenSceneFlow/dataprocess/README.md#argoverse-20) for tips on downloading the raw Argoverse 2 dataset. Or, if you want the **mini processed dataset** to try the code quickly, we directly provide one scene inside `train` and `val`; it is already converted to `.h5` format and processed with the label data.
You can download it from [Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip) and extract it to the data folder. You can then skip the following steps and directly run the [training script](#train-the-model).
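
For example (the extraction target `/home/kin/data` is just the path used elsewhere in this README; point it at your own data folder):

```bash
# /resolve/ serves the raw file from HuggingFace; unzip into your data folder.
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/demo_data.zip
unzip demo_data.zip -d /home/kin/data
```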

Check more information (steps for downloading the raw data, storage size, number of frames, etc.) in [dataprocess/README.md](dataprocess/README.md). Extract all data to the unified `.h5` format.

[Runtime: Normally needs around 45 minutes in total to finish the following commands with the setup mentioned in our paper.]

```bash
python dataprocess/extract_av2.py --av2_type sensor --data_mode val --mask_dir /home/kin/data/av2/3d_scene_flow
python dataprocess/extract_av2.py --av2_type sensor --data_mode test --mask_dir /home/kin/data/av2/3d_scene_flow
```
### Train the model

All local benchmarking methods and ablation studies can be run from the command line with different configs; check [`assets/slurm`](assets/slurm) for all the commands we used in the original DeFlow paper. You can check all parameters in [OpenSceneFlow/conf/config.yaml](./OpenSceneFlow/conf/config.yaml) and [OpenSceneFlow/conf/model/deflow.yaml](./OpenSceneFlow/conf/model/deflow.yaml). If you set `wandb_mode=online`, remember to change all `entity="kth-rpl"` to your own account name.

Train DeFlow with the leaderboard submission config. [Runtime: around 6-8 hours on 4x A100 GPUs.] Please change `batch_size` & `lr` accordingly if you don't have enough GPU memory (e.g., `batch_size=6` for a 24 GB GPU).
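
A sketch of such a launch with a reduced batch size, assuming Hydra-style overrides matching the config files above (the override keys and values are assumptions; verify them against `conf/config.yaml` before running):

```bash
# Hypothetical training run scaled down for a 24 GB GPU; tune lr alongside batch_size.
python train.py model=deflow batch_size=6 lr=1e-4 wandb_mode=offline
```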