Our model is built on the MMSegmentation library. To use MMSegmentation, please follow the official tutorial: https://github.com/open-mmlab/mmsegmentation#readme
- Stanford Healthcare provides an open-source cardiac echocardiography dataset containing 10,030 apical-4-chamber echocardiography videos. The dataset is publicly available at https://echonet.github.io/dynamic/
- Based on this video dataset, we propose a dataset of 1,047 images, each manually annotated with the left and right ventricles. The dataset is available here.
- Download the dataset and extract it into `./data/`, without creating subfolders.
- Pretrained models can be downloaded here. To reproduce the results, place the pretrained models (`*.pth`) in `./pretrained/` and then run the test directly.
- Train the `deeplabv3_unet_s5-d16` model:
python tools/train.py configs/unet/deeplabv3_unet_s5-d16_128x128_40k_Car_0505.py --work-dir=pretrained/2class_cardiac
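If you want to tweak the training schedule without editing the repo's config, MMSegmentation configs can inherit from one another via `_base_`. A minimal sketch of such an override (the base filename is taken from the command above; the override values are illustrative assumptions, not the repo's settings):

```python
# Hypothetical override config, e.g. saved as
# configs/unet/deeplabv3_unet_quick_test.py (name is an assumption).
_base_ = './deeplabv3_unet_s5-d16_128x128_40k_Car_0505.py'

# Shorten the schedule for a quick smoke test (illustrative values).
runner = dict(type='IterBasedRunner', max_iters=4000)
checkpoint_config = dict(by_epoch=False, interval=2000)
evaluation = dict(interval=2000, metric='mIoU')
```

Pass the override file to `tools/train.py` exactly like the original config; everything not redefined here is inherited from the base.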
- For testing, run:
python tools/test.py configs/unet/deeplabv3_unet_s5-d16_128x128_40k_Car_0505.py \
pretrained/2class_cardiac/latest.pth \
--show-dir results
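Beyond the qualitative overlays written to `--show-dir`, segmentation quality is usually summarized as per-class intersection-over-union. A minimal sketch of that metric, assuming integer label maps with 0 = background, 1 = left ventricle, 2 = right ventricle (the class ids are an assumption, not taken from the repo):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes=3):
    """Per-class IoU for two integer label maps of the same shape.

    Assumed class ids: 0 = background, 1 = left ventricle,
    2 = right ventricle (check the repo's dataset definition).
    Returns NaN for a class absent from both maps.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union else float('nan'))
    return ious
```

MMSegmentation's own `tools/test.py` can report mIoU directly (e.g. with `--eval mIoU`); this sketch only shows what that number measures.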
- Calculate the `mean` and `std` of a custom dataset:
python ./get_mean.py ${img_pth}
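Presumably `get_mean.py` reads every image under `${img_pth}` and reports per-channel statistics for normalization. The accumulation itself can be sketched with running sums so the whole dataset never has to fit in memory at once (a hypothetical `channel_mean_std`, not the repo's actual implementation):

```python
import numpy as np

def channel_mean_std(images):
    """Per-channel mean/std over an iterable of HxWx3 images.

    Uses running sum and sum-of-squares, so images can be loaded
    one at a time rather than stacked into a single array.
    """
    n = 0
    s = np.zeros(3, dtype=np.float64)   # per-channel sum
    s2 = np.zeros(3, dtype=np.float64)  # per-channel sum of squares
    for img in images:
        x = img.reshape(-1, 3).astype(np.float64)
        n += x.shape[0]
        s += x.sum(axis=0)
        s2 += (x ** 2).sum(axis=0)
    mean = s / n
    std = np.sqrt(s2 / n - mean ** 2)
    return mean, std
```

The resulting values are what MMSegmentation configs expect in `img_norm_cfg` (`mean=...`, `std=...`).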
- Save the output image sequence as a video:
python ./save_video.py ${original_pth} ${target_pth}
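One detail worth checking when stitching frames back into a video: plain lexicographic filename order puts `frame_10.png` before `frame_2.png`. A stdlib-only natural-sort key fixes the ordering (the actual video encoding, e.g. via OpenCV's `VideoWriter`, is omitted here; the filename pattern is an assumption):

```python
import re

def natural_key(name):
    """Split a filename into text and integer chunks so that
    'frame_2.png' sorts before 'frame_10.png'."""
    return [int(t) if t.isdigit() else t for t in re.split(r'(\d+)', name)]

frames = ['frame_10.png', 'frame_2.png', 'frame_1.png']
frames.sort(key=natural_key)
# frames is now ['frame_1.png', 'frame_2.png', 'frame_10.png']
```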