
Commit ae1bba0

update readme for JSRT and JSRT2

1 parent a472035 commit ae1bba0

2 files changed

Lines changed: 6 additions & 4 deletions

seg_nll/JSRT/README.md

Lines changed: 5 additions & 3 deletions
@@ -24,13 +24,15 @@ Currently, the following methods are available in PyMIC:
## Data

The [JSRT][jsrt_link] dataset is used in this demo. It consists of 247 chest radiographs. We have preprocessed the images by resizing them to 256x256 and extracting the lung masks for the segmentation task. The images are available at `PyMIC_data/JSRT`. The images are split into 180, 20 and 47 for training, validation and testing, respectively.

-For training images, we simulate noisy labels for 171 images (95%) and keep the clean label for 9 (5%) images. Run `python noise_simulate.py` to generate noisy labels based on dilation, erosion and edge distortion. The following figure shows simulated noisy labels compared with the ground truth clean label. The .csv files for data split are saved in `config/data`.
+[jsrt_link]:http://db.jsrt.or.jp/eng.php
+
+For training images, we simulate noisy labels for 171 images (95%) and keep the clean label for 9 (5%) images. Run `python noise_simulate.py` to generate noisy labels based on dilation, erosion and edge distortion. The output noisy labels are saved in `PyMIC_data/JSRT/label_noise1`. The following figure shows simulated noisy labels compared with the ground truth clean label. The .csv files for data split are saved in `config/data`.

![noisy_label](./picture/noisy_label.png)
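The dilation/erosion corruption described above can be sketched as follows. This is a hedged, pure-NumPy illustration, not the repo's `noise_simulate.py`: the function names, the cross-shaped structuring element, and the toy mask are all assumptions, and the script's edge-distortion step is omitted.

```python
import numpy as np

def dilate(mask, iterations=1):
    # Binary dilation with a 3x3 cross structuring element (assumed shape).
    m = mask.astype(bool)
    for _ in range(iterations):
        p = np.pad(m, 1)
        m = p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:] | p[1:-1, 1:-1]
    return m.astype(mask.dtype)

def erode(mask, iterations=1):
    # Binary erosion; pad with True so the image border is not eroded.
    m = mask.astype(bool)
    for _ in range(iterations):
        p = np.pad(m, 1, constant_values=True)
        m = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:] & p[1:-1, 1:-1]
    return m.astype(mask.dtype)

def simulate_noisy_label(mask, mode="dilate", iterations=3):
    # Over- or under-segment a clean mask to mimic annotator noise.
    return dilate(mask, iterations) if mode == "dilate" else erode(mask, iterations)

# Toy 64x64 mask standing in for a clean lung segmentation.
clean = np.zeros((64, 64), dtype=np.uint8)
clean[16:48, 16:48] = 1
over = simulate_noisy_label(clean, "dilate")   # larger than the clean mask
under = simulate_noisy_label(clean, "erode")   # smaller than the clean mask
```

Dilation over-segments and erosion under-segments, which is the kind of systematic boundary noise the demo trains against.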

## Training

-In this demo, we experiment with five methods: GCE loss, co-teaching, Trinet and DAST, and they are compared with the baseline of learning with a cross entropy loss. All these methods use UNet2D as the backbone network.
+In this demo, we experiment with five methods: GCE loss, co-teaching, Trinet and DAST, and the baseline of learning with a cross entropy loss. All these methods use UNet2D as the backbone network.

### Baseline Method

The dataset setting is similar to that in the `segmentation/JSRT` demo. See `config/unet2d_ce.cfg` for details. Here we use a slightly different setting of data and loss function:
@@ -54,7 +56,7 @@ pymic_run test config/unet_ce.cfg
```

### GCE Loss

-The configuration file for using GCE loss is `config/unet2d_gce.cfg`. The configuration is the same as that in the baseline except the loss function:
+The configuration file for using GCE loss is `config/unet2d_gce.cfg`. The configuration is the same as that in the baseline except for the loss function:

```bash
...
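For intuition, the GCE loss selected by this configuration can be sketched in NumPy. This is a hedged illustration of the loss formula from Zhang & Sabuncu (2018), not PyMIC's implementation; the function name and the toy inputs are assumptions.

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    # Generalized Cross Entropy: L_q = (1 - p_y**q) / q, where p_y is the
    # predicted probability of the labeled class. As q -> 0 it approaches
    # cross entropy; at q = 1 it becomes MAE, which is more tolerant of
    # label noise than plain cross entropy.
    p_y = probs[np.arange(len(labels)), labels]
    return float(np.mean((1.0 - p_y ** q) / q))

# Two toy 2-class predictions for the same (possibly noisy) label 0.
confident = np.array([[0.9, 0.1]])
uncertain = np.array([[0.6, 0.4]])
labels = np.array([0])
```

Because the loss saturates for low-probability labels, a wrongly annotated pixel pulls the gradient less than it would under cross entropy.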

segmentation/JSRT2/README.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
![image_example](../JSRT/picture/JPCLN001.png)
![label_example](../JSRT/picture/JPCLN001_seg.png)

-In this example, we show how to use a customized CNN and a customized loss function to segment the lung from X-Ray images. The configurations are the same as those in the `JSRT` example except the network structure and loss function.
+In this example, we show how to use a customized CNN and a customized loss function to segment the lung from X-Ray images. The configurations are the same as those in the `JSRT` example except for the network structure and loss function.

The customized CNN is detailed in `my_net2d.py`, which is a modification of the 2D UNet. In this new network, we use a residual connection in each block. The customized loss is detailed in `my_loss.py`, where we define a focal dice loss named `MyFocalDiceLoss`. We use `MyFocalDiceLoss + CrossEntropyLoss` to train the customized network.
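One common way to make a Dice loss "focal" is to raise the Dice loss term to a power so that well-segmented cases contribute less. The sketch below is illustrative only; `my_loss.py` defines the actual `MyFocalDiceLoss`, which may use a different formulation, and the names and toy data here are assumptions.

```python
import numpy as np

def focal_dice_loss(probs, target, gamma=2.0, eps=1e-5):
    # Hedged sketch: (1 - Dice)**gamma down-weights examples that are
    # already well segmented, analogous to the focal loss idea.
    inter = float((probs * target).sum())
    dice = (2.0 * inter + eps) / (float(probs.sum()) + float(target.sum()) + eps)
    return (1.0 - dice) ** gamma

# Toy 8x8 "lung" mask and two candidate soft predictions.
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0
good = 0.9 * target            # confident, mostly correct prediction
flat = 0.5 * np.ones((8, 8))   # uninformative prediction
```

In a combined objective such as `MyFocalDiceLoss + CrossEntropyLoss`, the Dice term handles class imbalance at the region level while cross entropy supervises individual pixels.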
