Commit e6cb63c

docs(time): train time on our cluster and notes
* fix #12 with detail train gpu in docs.
1 parent 22897f2 commit e6cb63c

1 file changed: README.md (8 additions & 8 deletions)
@@ -9,7 +9,7 @@ DeFlow: Decoder of Scene Flow Network in Autonomous Driving

Task: Scene Flow Estimation in Autonomous Driving.

-🤗 2024/11/18 16:17: Update model and demo data download link through HuggingFace, Personally I found `wget` from HuggingFace link is much faster than Zenodo.
+🤗 2024/11/18 16:17: Updated the model and demo data download links to HuggingFace; personally, I found `wget` from the HuggingFace link much faster than from Zenodo.

📜 2024/07/24: Merging SeFlow & DeFlow code together, lighter setup and easier running.
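The HuggingFace note above can be sketched as a direct `wget` download. This is a minimal sketch, assuming HuggingFace's standard `resolve/main` URL pattern; the checkpoint file name below is a hypothetical placeholder, so check the repo page for the real file list.

```bash
# Build a direct-download URL for a file in the HuggingFace repo mentioned above.
# NOTE: FILE is a hypothetical placeholder name, not a confirmed path in the repo.
REPO="kin-zhang/OpenSceneFlow"
FILE="deflow_official.ckpt"
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"
echo "${URL}"
# wget "${URL}"   # uncomment to actually download
```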

@@ -102,22 +102,22 @@ python dataprocess/extract_av2.py --av2_type sensor --data_mode test --mask_dir

### Train the model

-All local benchmarking methods and ablation studies can be done through command with different config, check [`assets/slurm`](assets/slurm) for all the commands we used in our experiments.
-
-Best fine-tuned model train with following command by other default config in [conf/config.yaml](conf/config.yaml) and [conf/model/deflow.yaml](conf/model/deflow.yaml), if you will set wandb_mode=online, maybe change all `entity="kth-rpl"` to your own account name.
+All local benchmarking methods and ablation studies can be run with a single command and different configs; check [`assets/slurm`](assets/slurm) for all the commands we used in the original DeFlow paper. You can check all parameters in [conf/config.yaml](conf/config.yaml) and [conf/model/deflow.yaml](conf/model/deflow.yaml); **if you set wandb_mode=online**, change every `entity="kth-rpl"` to your own account name.
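The entity change above can be scripted. A hedged sketch follows: the `grep`/`sed` sweep is my suggestion rather than part of the commit, and the demo file content (a `wandb.init` call) is illustrative, not a real line from the repo; in practice you would run the sweep from the repo root instead of a scratch directory.

```bash
# Demo on a scratch file: swap the hard-coded wandb entity for your own account.
MY_ENTITY="my-account"          # assumption: substitute your real wandb account name
mkdir -p /tmp/deflow_demo
printf 'run = wandb.init(entity="kth-rpl", project="deflow")\n' > /tmp/deflow_demo/train_snippet.py
# Find every file containing the hard-coded entity and rewrite it in place,
# keeping a .bak backup next to each touched file.
grep -rl 'entity="kth-rpl"' /tmp/deflow_demo | while read -r f; do
  sed -i.bak "s/entity=\"kth-rpl\"/entity=\"${MY_ENTITY}\"/g" "$f"
done
cat /tmp/deflow_demo/train_snippet.py
```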

+Train DeFlow with the leaderboard submission config (runtime: around 6-8 hours on 4x A100 GPUs). Please change `batch_size` & `lr` accordingly if you don't have enough GPU memory (e.g. `batch_size=6` for a 24 GB GPU).
```bash
-python train.py model=deflow lr=2e-4 epochs=20 batch_size=16 loss_fn=deflowLoss
+python train.py model=deflow lr=2e-4 epochs=15 batch_size=16 loss_fn=deflowLoss
+# baseline in our paper:
python train.py model=fastflow3d lr=4e-5 epochs=20 batch_size=16 loss_fn=ff3dLoss
```
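The memory note above says to change `batch_size` & `lr` together but does not give a rule. One common heuristic is to scale the learning rate linearly with batch size; this is an assumption on my part, not something the authors state, so treat the resulting value as a starting point.

```bash
# Linear lr-scaling sketch: lr_new = lr_base * bs_new / bs_base (heuristic, not official).
BASE_LR=0.0002   # 2e-4, the leaderboard config above
BASE_BS=16
NEW_BS=6         # e.g. fits a 24 GB GPU per the note above
NEW_LR=$(awk "BEGIN { printf \"%g\", ${BASE_LR} * ${NEW_BS} / ${BASE_BS} }")
echo "${NEW_LR}"   # 7.5e-05
# The resulting (hypothetical) command, printed rather than executed:
echo "python train.py model=deflow lr=${NEW_LR} epochs=15 batch_size=${NEW_BS} loss_fn=deflowLoss"
```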

> [!NOTE]
-> You may found the different settings in the paper that is all methods are enlarge learning rate to 2e-4 and decrease the epochs to 20 for faster converge and better performance.
-> However, we kept the setting on lr=2e-6 and 50 epochs in (SeFlow & DeFlow) paper experiments for the fair comparison with ZeroFlow where we directly use their provided weights.
+> You may notice settings different from the paper: all methods enlarge the learning rate to 2e-4 and decrease the epochs to 15 for faster convergence and better performance (this is also our leaderboard model training config).
+> However, we kept lr=2e-6 and 50 epochs in the (SeFlow & DeFlow) paper experiments for **a fair comparison** with ZeroFlow, where we directly use their provided weights.
> We suggest that future researchers and users adopt the setting here (larger lr, fewer epochs) for faster convergence and better performance.

To help community benchmarking, we provide our weights, including fastflow3d and deflow, on [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow).
-These checkpoints also include parameters and status of that epoch inside it. If you are interested in weights of ablation studies, please contact us.
+These checkpoints also include the parameters and training status of that epoch.

## 2. Evaluation
