Commit a472035 ("update readme"), parent 4ff0a26
2 files changed: 55 additions & 146 deletions

seg_nll/JSRT/README.md

Currently, the following methods are available in PyMIC:

## Data
The [JSRT][jsrt_link] dataset is used in this demo. It consists of 247 chest radiographs. We have preprocessed the images by resizing them to 256x256 and extracting the lung masks for the segmentation task. The images are available at `PyMIC_data/JSRT`, and they are split into 180, 20 and 47 images for training, validation and testing, respectively.

For the training images, we simulate noisy labels for 171 images (95%) and keep the clean labels for the remaining 9 images (5%). Run `python noise_simulate.py` to generate noisy labels based on dilation, erosion and edge distortion. The following figure shows simulated noisy labels compared with the clean ground truth. The `.csv` files for the data split are saved in `config/data`.

![noisy_label](./picture/noisy_label.png)
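
The noise simulation script itself is not reproduced here; as a rough illustration of how dilation, erosion and edge distortion can corrupt a clean mask (a minimal sketch assuming SciPy, not the actual `noise_simulate.py`):

```python
import numpy as np
from scipy import ndimage

def simulate_noisy_label(mask, op="dilate", iters=5, seed=0):
    """Corrupt a binary mask by dilation, erosion or random edge distortion.

    Illustrative sketch only; the real noise_simulate.py may differ.
    """
    rng = np.random.default_rng(seed)
    mask = np.asarray(mask) > 0
    if op == "dilate":
        noisy = ndimage.binary_dilation(mask, iterations=iters)
    elif op == "erode":
        noisy = ndimage.binary_erosion(mask, iterations=iters)
    else:
        # edge distortion: randomly flip pixels in a band around the boundary
        band = ndimage.binary_dilation(mask, iterations=iters) ^ \
               ndimage.binary_erosion(mask, iterations=iters)
        flip = band & (rng.random(mask.shape) < 0.5)
        noisy = mask ^ flip
    return noisy.astype(np.uint8) * 255  # foreground stored as 255
```

Dilation and erosion shift the lung boundary outward or inward, while the edge-distortion branch randomly flips pixels near the boundary.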

## Training
In this demo, we experiment with four noisy-label learning methods: GCE loss, Co-Teaching, TriNet and DAST, and compare them with the baseline of learning with a standard cross-entropy loss. All of these methods use UNet2D as the backbone network.

### Baseline Method
The dataset setting is similar to that in the `segmentation/JSRT` demo; see `config/unet_ce.cfg` for details. Here we use a slightly different setting of the data and the loss function:

```bash
...
task_type = seg
root_dir = ../../PyMIC_data/JSRT
train_csv = config/data/jsrt_train_mix.csv
valid_csv = config/data/jsrt_valid.csv
test_csv = config/data/jsrt_test.csv
...
loss_type = CrossEntropyLoss
```

The following commands are used for training and inference with this method, respectively:

```bash
pymic_run train config/unet_ce.cfg
pymic_run test config/unet_ce.cfg
```

### GCE Loss
The configuration file for the GCE (Generalized Cross Entropy) loss is `config/unet_gce.cfg`. The configuration is the same as that of the baseline except for the loss function:

```bash
...
loss_type = GeneralizedCELoss
...
```
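
The GCE loss interpolates between cross entropy and mean absolute error, which makes training more robust to label noise. A minimal NumPy sketch of the idea (illustrative only; the function name and the `q` parameter shown here are assumptions, not PyMIC's actual `GeneralizedCELoss` API):

```python
import numpy as np

def gce_loss(probs, labels, q=0.5):
    """Generalized Cross Entropy: L_q = (1 - p_y^q) / q.

    Approaches cross entropy as q -> 0 and mean absolute error at q = 1.
    probs:  (N, C) predicted class probabilities
    labels: (N,) integer class indices
    """
    p_y = probs[np.arange(len(labels)), labels]  # probability of labeled class
    return np.mean((1.0 - p_y ** q) / q)
```

Smaller `q` behaves more like CE (fast fitting, noise sensitive); `q` near 1 behaves like MAE (slower fitting, noise robust).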

The following commands are used for training and inference with this method, respectively:

```bash
pymic_run train config/unet_gce.cfg
pymic_run test config/unet_gce.cfg
```

### Co-Teaching
The configuration file for Co-Teaching is `config/unet_cot.cfg`. The corresponding setting is:

```bash
nll_method = CoTeaching
co_teaching_select_ratio = 0.8
rampup_start = 1000
rampup_end = 8000
```
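
In Co-Teaching, two networks are trained simultaneously and each selects the small-loss samples (assumed more likely to have clean labels) to train the other. A hedged sketch of the selection rule (PyMIC's `CoTeaching` applies it per pixel inside the training loop; the ramp-up settings above presumably schedule the selection ratio):

```python
import numpy as np

def co_teaching_select(loss_a, loss_b, select_ratio=0.8):
    """One Co-Teaching selection step. Illustrative sketch only.

    Each network keeps the fraction of samples on which the *other*
    network has the smallest loss.
    """
    n_keep = int(len(loss_a) * select_ratio)
    idx_for_a = np.argsort(loss_b)[:n_keep]  # B picks samples to train A
    idx_for_b = np.argsort(loss_a)[:n_keep]  # A picks samples to train B
    return idx_for_a, idx_for_b
```

Cross-selection matters: each network filters for its peer, so the two networks do not reinforce their own mistakes.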

The following commands are used for training and inference with this method, respectively:

```bash
pymic_nll train config/unet_cot.cfg
pymic_nll test config/unet_cot.cfg
```

### TriNet
The configuration file for TriNet is `config/unet_trinet.cfg`. The corresponding setting is:

```bash
nll_method = TriNet
trinet_select_ratio = 0.9
rampup_start = 1000
rampup_end = 8000
```
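
TriNet extends the small-loss selection idea to three networks: roughly, each network is trained on samples that its two peers jointly consider reliable. A hedged sketch of that selection rule (illustrative only; PyMIC's `TriNet` applies the idea per pixel and may combine the peers' opinions differently):

```python
import numpy as np

def trinet_select(loss_b, loss_c, select_ratio=0.9):
    """Select training samples for one network using its two peers.

    Sketch under the small-loss assumption: the peers average their
    per-sample losses and keep the fraction with the smallest values.
    """
    avg_loss = 0.5 * (loss_b + loss_c)
    n_keep = int(len(avg_loss) * select_ratio)
    return np.argsort(avg_loss)[:n_keep]
```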

The following commands are used for training and inference with this method, respectively:

```bash
pymic_nll train config/unet_trinet.cfg
pymic_nll test config/unet_trinet.cfg
```

### DAST
The configuration file for DAST is `config/unet_dast.cfg`. The corresponding setting is:

```bash
nll_method = DAST
dast_dbc_w = 0.1
dast_st_w = 0.1
dast_rank_length = 20
dast_select_ratio = 0.2
rampup_start = 1000
rampup_end = 8000
```

The commands for training and inference are:

```bash
pymic_nll train config/unet_dast.cfg
pymic_run test config/unet_dast.cfg
```

## Evaluation
Use `pymic_eval_seg config/evaluation.cfg` for quantitative evaluation of the segmentation results. You need to edit `config/evaluation.cfg` first, for example:

```bash
metric = dice
label_list = [255]
organ_name = lung

ground_truth_folder_root = ../../PyMIC_data/JSRT
segmentation_folder_root = result/unet_ce
evaluation_image_pair = config/data/jsrt_test_gt_seg.csv
```

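Here `label_list = [255]` indicates that the lung foreground is stored with intensity 255 in the mask images. For reference, the Dice metric being computed is sketched below (a minimal standalone version, not PyMIC's implementation):

```python
import numpy as np

def binary_dice(seg, gt, fg_value=255):
    """Dice score between a predicted mask and a ground-truth mask.

    fg_value=255 matches label_list = [255] in evaluation.cfg.
    """
    s = (np.asarray(seg) == fg_value)
    g = (np.asarray(gt) == fg_value)
    inter = np.logical_and(s, g).sum()
    denom = s.sum() + g.sum()
    # Dice = 2|S ∩ G| / (|S| + |G|); define empty-vs-empty as 1.0
    return 2.0 * inter / denom if denom > 0 else 1.0
```

A Dice of 1.0 means perfect overlap with the ground truth; 0.0 means no overlap.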