# DeFlow: Decoder of Scene Flow Network in Autonomous Driving
Task: Scene Flow Estimation in Autonomous Driving.
📜 2025/02/18: All scene flow code has been merged into one general codebase, which is the only repo updated from now on. This repo still keeps the DeFlow README and [cluster slurm files](assets/slurm).
🤗 2024/11/18: Updated the model and demo data download links to HuggingFace; in my experience, `wget` from the HuggingFace link is much faster than from Zenodo.
Check usage in [2. Evaluation](#2-evaluation) or [3. Visualization](#3-visualization).
## 0. Setup
**Environment**: Clone the repo and build the environment; check the [detailed installation guide](https://github.com/KTH-RPL/OpenSceneFlow/blob/main/assets/README.md) for more information. [Conda](https://docs.conda.io/projects/miniconda/en/latest/)/[Mamba](https://github.com/mamba-org/mamba) is recommended.
```bash
cd assets/cuda/mmcv && python ./setup.py install && cd ../../..
cd assets/cuda/chamfer3D && python ./setup.py install && cd ../../..
```
Alternatively, you can set up an isolated environment with [Docker](https://en.wikipedia.org/wiki/Docker_(software)); check more information in [OpenSceneFlow/assets/README.md](https://github.com/KTH-RPL/OpenSceneFlow/blob/main/assets/README.md#docker-environment).
Note: Preparing the raw data and processing the training data only needs to be done once per task.
### Data Preparation
Check [OpenSceneFlow/dataprocess/README.md](https://github.com/KTH-RPL/OpenSceneFlow/blob/main/dataprocess/README.md#argoverse-20) for tips on downloading the raw Argoverse 2 dataset. If you want to try the code quickly with the **mini processed dataset**, we directly provide one scene inside `train` and `val`, already converted to `.h5` format and processed with the label data.
You can download it from [Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip) and extract it to the data folder; then you can skip the following steps and directly run the [training script](#train-the-model).
All local benchmarking methods and ablation studies can be run via commands with different configs; check [`assets/slurm`](assets/slurm) for all the commands we used in the DeFlow paper. You can check all parameters in [OpenSceneFlow/conf/config.yaml](https://github.com/KTH-RPL/OpenSceneFlow/blob/main/conf/config.yaml) and [OpenSceneFlow/conf/model/deflow.yaml](https://github.com/KTH-RPL/OpenSceneFlow/blob/main/conf/model/deflow.yaml). **If you set `wandb_mode=online`**, remember to change all `entity="kth-rpl"` entries to your own account name.
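For orientation, the relevant fragment of the config might look roughly like the sketch below. This is a hypothetical illustration, not the repo's actual `config.yaml` (field names and layout are assumptions; check the linked files for the real ones):

```yaml
# hypothetical config fragment -- verify against conf/config.yaml
wandb_mode: online      # or: offline / disabled
entity: "kth-rpl"       # change to your own wandb account/team name
```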
Train DeFlow with the leaderboard submission config. [Runtime: around 6-8 hours on 4x A100 GPUs.] Please change `batch_size` & `lr` accordingly if you don't have enough GPU memory (e.g. `batch_size=6` for a 24GB GPU).
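Adjusting `batch_size` and `lr` together is usually done with the linear-scaling heuristic (learning rate proportional to batch size). The sketch below is a generic illustration only; the reference values `base_lr=2e-4` and `base_batch=16` are made-up assumptions, not the repo's actual defaults, so check the config files for the real numbers:

```python
def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear-scaling heuristic: scale the learning rate with the batch size."""
    return base_lr * new_batch / base_batch

# hypothetical reference values; substitute the ones from your config
base_lr, base_batch = 2e-4, 16
print(scale_lr(base_lr, base_batch, new_batch=6))
```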
Check all detailed result files (presented in our paper Table 1) in [this discussion](https://github.com/KTH-RPL/DeFlow/discussions/2).
To submit to the online leaderboard, select `av2_mode=test`; the output will be a zip file ready for you to submit to the leaderboard.
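Before uploading, it can be worth sanity-checking that the produced archive is a readable, non-empty zip. This is a generic stdlib sketch (the file names are hypothetical, not what the repo actually emits):

```python
import os
import tempfile
import zipfile

def is_valid_submission(path: str) -> bool:
    """Check the archive opens, is non-empty, and has no corrupt members."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return len(zf.namelist()) > 0 and zf.testzip() is None

# demo with a dummy archive standing in for the real submission zip
tmp = os.path.join(tempfile.mkdtemp(), "submission.zip")
with zipfile.ZipFile(tmp, "w") as zf:
    zf.writestr("results/placeholder.txt", b"placeholder")
print(is_valid_submission(tmp))  # True
```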
Note: the leaderboard result in the DeFlow paper is from [version 1](https://eval.ai/web/challenges/challenge-page/2010/evaluation); [version 2](https://eval.ai/web/challenges/challenge-page/2210/overview) was introduced after the DeFlow paper.
```bash
# Step 1: the evalai CLI may conflict with the deflow environment, so create a new one
mamba create -n py37 python=3.7
mamba activate py37
pip install "evalai"

# Step 2: log in to EvalAI and register your team
evalai set-token <your token>

# Step 3: copy the command printed above and submit to the leaderboard
```