Commit 3561358 (parent 9068c24)

update configure files for examples

19 files changed: 87 additions & 77 deletions

classification/AntBee/README.md

Lines changed: 6 additions & 6 deletions

````diff
@@ -13,14 +13,14 @@ In this example, we finetune a pretrained resnet18 for classification of images
 [data_link]:https://download.pytorch.org/tutorial/hymenoptera_data.zip
 
 ## Finetuning all layers of resnet18
-1. Here we use resnet18 for finetuning, and update all the layers. Open the configure file `config/train_test_ce1.cfg`. In the `network` section we can find details for the network. Here `update_layers = 0` means updating all the layers.
+1. Here we use resnet18 for finetuning, and update all the layers. Open the configure file `config/train_test_ce1.cfg`. In the `network` section we can find details for the network. Here `update_mode = all` means updating all the layers.
 ```bash
 # type of network
 net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 ```
 
 Then start to train by running:
@@ -48,20 +48,20 @@ pymic_run test config/train_test_ce1.cfg
 pymic_eval_cls config/evaluation.cfg
 ```
 
-The obtained accuracy by default setting should be around 0.9412, and the AUC will be around 0.976.
+The obtained accuracy by default setting should be around 0.9477, and the AUC will be around 0.9745.
 
 3. Run `python show_roc.py` to show the receiver operating characteristic curve.
 
 ![roc](./picture/roc.png)
 
 ## Finetuning the last layer of resnet18
-Similarly to the above example, we further try to only finetune the last layer of resnet18 for the same classification task. Use a different configure file `config/train_test_ce2.cfg` for training and testing, where `update_layers = -1` in the `network` section means updating the last layer only:
+Similarly to the above example, we further try to only finetune the last layer of resnet18 for the same classification task. Use a different configure file `config/train_test_ce2.cfg` for training and testing, where `update_mode = last` in the `network` section means updating the last layer only:
 ```bash
 net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune the last layer only
-update_layers = -1
+update_mode = last
 ```
 
-Edit `config/evaluation.cfg` accordinly for evaluation.
+Edit `config/evaluation.cfg` accordingly for evaluation. The corresponding accuracy and AUC would be around 0.9477 and 0.9778, respectively.
````
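This commit renames `update_layers = 0 / -1` to the more descriptive `update_mode = all / last`. In PyTorch, selective finetuning of this kind is usually implemented by toggling `requires_grad` on parameters; a minimal sketch under that assumption (the helper name and the "last child module" heuristic are hypothetical, not PyMIC's actual code):

```python
import torch.nn as nn

def set_update_mode(model: nn.Module, update_mode: str) -> None:
    """Hypothetical helper: map update_mode = all / last onto
    requires_grad flags, freezing everything except the final
    submodule when only the last layer is finetuned."""
    if update_mode == "all":
        for p in model.parameters():
            p.requires_grad = True
    elif update_mode == "last":
        for p in model.parameters():
            p.requires_grad = False
        # for torchvision's resnet18 the last child is the fc head
        for p in list(model.children())[-1].parameters():
            p.requires_grad = True
    else:
        raise ValueError(f"unknown update_mode: {update_mode}")

# toy stand-in for a backbone plus classification head
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2))
set_update_mode(model, "last")
# only the parameters of the final Linear(256, 2) remain trainable
```

Passing the frozen model to the optimizer then only updates the unfrozen head, which is what `update_mode = last` asks for.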

classification/AntBee/config/train_test_ce1.cfg

Lines changed: 12 additions & 11 deletions

```diff
@@ -14,8 +14,8 @@ train_batch_size = 4
 modal_num = 1
 
 # data transforms
-train_transform = [Rescale, RandomCrop, RandomFlip, NormalizeWithMeanStd]
-valid_transform = [Rescale, CenterCrop, NormalizeWithMeanStd]
+train_transform = [Rescale, RandomCrop, RandomFlip, NormalizeWithMeanStd, LabelToProbability]
+valid_transform = [Rescale, CenterCrop, NormalizeWithMeanStd, LabelToProbability]
 test_transform = [Rescale, CenterCrop, NormalizeWithMeanStd]
 
 Rescale_output_size = [256, 256]
@@ -39,7 +39,7 @@ net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 
 # number of classes
 class_num = 2
@@ -56,19 +56,20 @@ learning_rate = 1e-3
 momentum = 0.9
 weight_decay = 1e-5
 
-# for lr schedular (MultiStepLR)
-lr_scheduler = MultiStepLR
-lr_gamma = 0.1
-lr_milestones = [500, 1000]
+# for lr scheduler (StepLR)
+lr_scheduler = StepLR
+lr_gamma = 0.5
+lr_step = 500
 
-ckpt_save_dir = model/resnet18_ce1
-ckpt_prefix = resnet18
+ckpt_save_dir = model/resnet18_ce1
+ckpt_prefix = resnet18
 
 # iteration
 iter_start = 0
-iter_max = 1500
+iter_max = 2000
 iter_valid = 100
-iter_save = 1500
+iter_save = 2000
+early_stop_patience = 1000
 
 [testing]
 # list of gpus
```
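The scheduler switch above (MultiStepLR with milestones `[500, 1000]` and gamma 0.1, to StepLR with a fixed step of 500 and gamma 0.5) maps directly onto `torch.optim.lr_scheduler.StepLR`. A small sketch of the new schedule (the toy model and empty loop body are illustrative only):

```python
import torch
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(4, 2)  # toy stand-in for resnet18
opt = torch.optim.SGD(model.parameters(), lr=1e-3,
                      momentum=0.9, weight_decay=1e-5)
# matches lr_scheduler = StepLR, lr_gamma = 0.5, lr_step = 500:
# the learning rate is halved every 500 iterations
sched = StepLR(opt, step_size=500, gamma=0.5)

for it in range(2000):  # iter_max = 2000
    # ... forward / backward / opt.step() would go here ...
    sched.step()

# after 2000 scheduler steps the lr is 1e-3 * 0.5 ** 4
```

Unlike MultiStepLR, which decays only at the listed milestones, StepLR keeps decaying at a fixed interval for the whole run.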

classification/AntBee/config/train_test_ce2.cfg

Lines changed: 10 additions & 10 deletions

```diff
@@ -14,8 +14,8 @@ train_batch_size = 4
 modal_num = 1
 
 # data transforms
-train_transform = [Rescale, RandomCrop, RandomFlip, NormalizeWithMeanStd]
-valid_transform = [Rescale, CenterCrop, NormalizeWithMeanStd]
+train_transform = [Rescale, RandomCrop, RandomFlip, NormalizeWithMeanStd, LabelToProbability]
+valid_transform = [Rescale, CenterCrop, NormalizeWithMeanStd, LabelToProbability]
 test_transform = [Rescale, CenterCrop, NormalizeWithMeanStd]
 
 Rescale_output_size = [256, 256]
@@ -39,8 +39,7 @@ net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune the last layer only
-update_layers = -1
-
+update_mode = last
 
 # number of classes
 class_num = 2
@@ -57,19 +56,20 @@ learning_rate = 1e-3
 momentum = 0.9
 weight_decay = 1e-5
 
-# for lr schedular (MultiStepLR)
-lr_scheduler = MultiStepLR
-lr_gamma = 0.1
-lr_milestones = [500, 1000]
+# for lr scheduler (StepLR)
+lr_scheduler = StepLR
+lr_gamma = 0.5
+lr_step = 500
 
 ckpt_save_dir = model/resnet18_ce2
 ckpt_prefix = resnet18
 
 # iteration
 iter_start = 0
-iter_max = 1500
+iter_max = 2000
 iter_valid = 100
-iter_save = 1500
+iter_save = 2000
+early_stop_patience = 1000
 
 [testing]
 # list of gpus
```
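Both AntBee configs add `early_stop_patience = 1000` alongside the longer `iter_max = 2000`. A plausible reading of that option, sketched with hypothetical names (not PyMIC's implementation): stop training once the best validation result is at least `patience` iterations old.

```python
def should_stop(valid_scores, iter_valid, patience):
    """Hypothetical early-stopping check. `valid_scores` holds one
    validation score per round, run every `iter_valid` iterations;
    stop when the best score is at least `patience` iterations old."""
    if not valid_scores:
        return False
    best_idx = max(range(len(valid_scores)), key=valid_scores.__getitem__)
    iters_since_best = (len(valid_scores) - 1 - best_idx) * iter_valid
    return iters_since_best >= patience

# iter_valid = 100, early_stop_patience = 1000: training stops after
# 10 validation rounds without improvement over the best score
scores = [0.50, 0.70, 0.71] + [0.70] * 10
```

Under this reading, raising `iter_max` to 2000 while setting the patience to 1000 lets good runs train longer without forcing bad ones to finish.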

classification/CHNCXR/README.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -20,7 +20,7 @@ net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 ```
 
 Start to train by running:
@@ -48,12 +48,12 @@ pymic_run test config/net_resnet18.cfg
 pymic_eval_cls config/evaluation.cfg
 ```
 
-The obtained accuracy by default setting should be around 0.8571, and the AUC is 0.94.
+The obtained accuracy by default setting should be around 0.8271, and the AUC is 0.9343.
 
 3. Run `python show_roc.py` to show the receiver operating characteristic curve.
 
 ![roc](./picture/roc.png)
 
 
 ## Finetuning vgg16
-Similarly to the above example, we further try to finetune vgg16 for the same classification task. Use a different configure file `config/net_vg16.cfg` for training and testing. Edit `config/evaluation.cfg` accordinly for evaluation. The iteration number for the highest accuracy on the validation set was 2300, and the accuracy will be around 0.8797.
+Similarly to the above example, we further try to finetune vgg16 for the same classification task. Use a different configure file `config/net_vgg16.cfg` for training and testing. Edit `config/evaluation.cfg` accordingly for evaluation. The accuracy and AUC would be around 0.8571 and 0.9271, respectively.
````
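The accuracy and AUC figures above come from `pymic_eval_cls`. As a reference for what they measure (a self-contained sketch, not PyMIC's evaluation code): accuracy thresholds the predicted probability, while AUC is the probability that a randomly chosen positive case scores above a randomly chosen negative one.

```python
def binary_accuracy(labels, probs, threshold=0.5):
    """Fraction of cases whose thresholded probability matches the label."""
    preds = [1 if p >= threshold else 0 for p in probs]
    return sum(int(p == l) for p, l in zip(preds, labels)) / len(labels)

def binary_auc(labels, probs):
    """Rank-based AUC: P(random positive ranked above random negative),
    counting ties as half a win."""
    pos = [p for p, l in zip(probs, labels) if l == 1]
    neg = [p for p, l in zip(probs, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
probs = [0.1, 0.4, 0.35, 0.8]
# binary_auc(labels, probs) -> 0.75
```

This is also why accuracy and AUC can move in opposite directions between commits: AUC depends only on the ranking of scores, not on the 0.5 threshold.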

classification/CHNCXR/config/net_resnet18.cfg

Lines changed: 9 additions & 8 deletions

```diff
@@ -14,8 +14,8 @@ train_batch_size = 4
 modal_num = 1
 
 # data transforms
-train_transform = [Rescale, RandomCrop, RandomFlip, GrayscaleToRGB, NormalizeWithMeanStd]
-valid_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd]
+train_transform = [Rescale, RandomCrop, RandomFlip, GrayscaleToRGB, NormalizeWithMeanStd, LabelToProbability]
+valid_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd, LabelToProbability]
 test_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd]
 
 Rescale_output_size = [256, 256]
@@ -37,7 +37,7 @@ net_type = resnet18
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 
 # number of classes
 class_num = 2
@@ -54,10 +54,10 @@ learning_rate = 1e-3
 momentum = 0.9
 weight_decay = 1e-5
 
-# for lr schedular (MultiStepLR)
-lr_scheduler = MultiStepLR
-lr_gamma = 0.1
-lr_milestones = [1500, 3000]
+# for lr scheduler (StepLR)
+lr_scheduler = StepLR
+lr_gamma = 0.5
+lr_step = 1000
 
 ckpt_save_dir = model/resnet18
 ckpt_prefix = resnet18
@@ -66,7 +66,8 @@ ckpt_prefix = resnet18
 iter_start = 0
 iter_max = 5000
 iter_valid = 100
-iter_save = 1000
+iter_save = 5000
+early_stop_patience = 2000
 
 [testing]
 # list of gpus
```
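The `LabelToProbability` transform added to the train/valid pipelines here (and in the AntBee configs) suggests converting the integer class label into a probability vector for the loss. A minimal sketch of that idea (pure Python; PyMIC's actual transform may differ in shape and naming):

```python
def label_to_probability(label: int, class_num: int) -> list:
    """One-hot encode a class label as a probability vector, e.g. for
    losses that expect soft labels rather than class indices."""
    prob = [0.0] * class_num
    prob[label] = 1.0
    return prob

# with class_num = 2 as in these configs:
# label_to_probability(1, 2) -> [0.0, 1.0]
```

Note that the test pipeline does not get this transform, since no labels are consumed at inference time.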

classification/CHNCXR/config/net_vgg16.cfg

Lines changed: 9 additions & 8 deletions

```diff
@@ -14,8 +14,8 @@ train_batch_size = 4
 modal_num = 1
 
 # data transforms
-train_transform = [Rescale, RandomCrop, RandomFlip, GrayscaleToRGB, NormalizeWithMeanStd]
-valid_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd]
+train_transform = [Rescale, RandomCrop, RandomFlip, GrayscaleToRGB, NormalizeWithMeanStd, LabelToProbability]
+valid_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd, LabelToProbability]
 test_transform = [Rescale, CenterCrop, GrayscaleToRGB, NormalizeWithMeanStd]
 
 Rescale_output_size = [256, 256]
@@ -37,7 +37,7 @@ net_type = vgg16
 pretrain = True
 input_chns = 3
 # finetune all the layers
-update_layers = 0
+update_mode = all
 
 # number of classes
 class_num = 2
@@ -54,10 +54,10 @@ learning_rate = 1e-3
 momentum = 0.9
 weight_decay = 1e-5
 
-# for lr schedular (MultiStepLR)
-lr_scheduler = MultiStepLR
-lr_gamma = 0.1
-lr_milestones = [1500, 3000]
+# for lr scheduler (StepLR)
+lr_scheduler = StepLR
+lr_gamma = 0.5
+lr_step = 1000
 
 ckpt_save_dir = model/vgg16
 ckpt_prefix = vgg16
@@ -66,7 +66,8 @@ ckpt_prefix = vgg16
 iter_start = 0
 iter_max = 5000
 iter_valid = 100
-iter_save = 1000
+iter_save = 5000
+early_stop_patience = 2000
 
 [testing]
 # list of gpus
```
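Both CHNCXR pipelines keep `GrayscaleToRGB` so that single-channel chest X-rays match the 3-channel input (`input_chns = 3`) expected by ImageNet-pretrained resnet18 and vgg16. A sketch of what such a transform does (assuming channel-first arrays; not PyMIC's code):

```python
import numpy as np

def grayscale_to_rgb(image: np.ndarray) -> np.ndarray:
    """Repeat a single channel three times, turning a (1, H, W)
    grayscale image into a (3, H, W) pseudo-RGB image."""
    if image.shape[0] != 1:
        raise ValueError("expected a single-channel, channel-first image")
    return np.repeat(image, 3, axis=0)

xray = np.random.rand(1, 256, 256)
rgb = grayscale_to_rgb(xray)  # shape (3, 256, 256), all channels equal
```

Repeating the channel keeps the pretrained first-layer filters usable without retraining them for 1-channel input.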

seg_ssl/ACDC/config/unet2d_baseline.cfg

Lines changed: 3 additions & 4 deletions

```diff
@@ -40,7 +40,6 @@ GammaCorrection_gamma_max = 1.5
 GaussianNoise_channels = [0]
 GaussianNoise_mean = 0
 GaussianNoise_std = 0.05
-GaussianNoise_probability = 0.5
 
 [network]
 # this section gives parameters for network
@@ -55,11 +54,11 @@ in_chns = 1
 feature_chns = [16, 32, 64, 128, 256]
 dropout = [0.0, 0.0, 0.0, 0.5, 0.5]
 bilinear = True
-deep_supervise= False
+multiscale_pred = False
 
 [training]
 # list of gpus
-gpus = [0]
+gpus = [1]
 
 loss_type = [DiceLoss, CrossEntropyLoss]
 loss_weight = [0.5, 0.5]
@@ -82,7 +81,7 @@ ckpt_save_dir = model/unet2d_baseline
 iter_start = 0
 iter_max = 30000
 iter_valid = 100
-iter_save = [30000]
+iter_save = 30000
 
 [testing]
 # list of gpus
```
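This file also changes `iter_save = [30000]` to the scalar `iter_save = 30000`, matching the scalar form used in the classification configs above. A config reader that tolerates both spellings could look like the following (a hypothetical helper; the interpretation of a scalar as a save interval and a list as explicit iterations is an assumption):

```python
import ast

def parse_iter_save(raw: str, iter_max: int) -> list:
    """Hypothetical parser: a scalar value means 'save every N
    iterations'; a list means 'save exactly at these iterations'."""
    value = ast.literal_eval(raw)
    if isinstance(value, int):
        return list(range(value, iter_max + 1, value))
    return list(value)

# both forms describe the same single checkpoint here:
# parse_iter_save("30000", 30000)   -> [30000]
# parse_iter_save("[30000]", 30000) -> [30000]
```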

seg_ssl/ACDC/config/unet2d_cps.cfg

Lines changed: 1 addition & 1 deletion

```diff
@@ -58,7 +58,7 @@ in_chns = 1
 feature_chns = [16, 32, 64, 128, 256]
 dropout = [0.0, 0.0, 0.0, 0.5, 0.5]
 bilinear = True
-deep_supervise= False
+multiscale_pred = False
 
 [training]
 # list of gpus
```

seg_ssl/ACDC/config/unet2d_em.cfg

Lines changed: 1 addition & 1 deletion

```diff
@@ -58,7 +58,7 @@ in_chns = 1
 feature_chns = [16, 32, 64, 128, 256]
 dropout = [0.0, 0.0, 0.0, 0.5, 0.5]
 bilinear = True
-deep_supervise= False
+multiscale_pred = False
 
 [training]
 # list of gpus
```

seg_ssl/ACDC/config/unet2d_mt.cfg

Lines changed: 1 addition & 1 deletion

```diff
@@ -59,7 +59,7 @@ in_chns = 1
 feature_chns = [16, 32, 64, 128, 256]
 dropout = [0.0, 0.0, 0.0, 0.5, 0.5]
 bilinear = True
-deep_supervise= False
+multiscale_pred = False
 
 [training]
 # list of gpus
```
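The ACDC semi-supervised configs all repeat one rename: `deep_supervise` becomes `multiscale_pred`. When such an option is enabled, a U-Net typically emits predictions at several decoder scales and the training loss combines them; a rough sketch of that combination (simplified so all predictions already share the target's shape, and not PyMIC's implementation):

```python
def multiscale_loss(preds, target, loss_fn, weights=None):
    """Weighted sum of a base loss over predictions from several
    decoder scales; equal weights by default."""
    if weights is None:
        weights = [1.0 / len(preds)] * len(preds)
    return sum(w * loss_fn(p, target) for w, p in zip(weights, preds))

# toy example with scalar 'predictions' and an L1-style loss
l1 = lambda p, t: abs(p - t)
# multiscale_loss([1.0, 3.0], 2.0, l1) -> 1.0
```

With `multiscale_pred = False`, as in all four configs here, only the full-resolution output contributes to the loss.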
