- Prepare data:
  mkdir ./datasets/SIDD
- Download the data from Google Drive or 百度网盘 (Baidu Netdisk); it should be like:
  ./datasets/SIDD/Data
  ./datasets/SIDD/train
  ./datasets/SIDD/val
  ./datasets/SIDD/test
  ./datasets/SIDD/test/ValidationNoisyBlocksSrgb.mat
  ./datasets/SIDD/test/ValidationGtBlocksSrgb.mat
- Run
  python scripts/data_preparation/sidd.py
  to crop the train image pairs to 512x512 patches and make the data into lmdb format; it should be like:
  ./datasets/SIDD/val/input_crops.lmdb
  ./datasets/SIDD/val/gt_crops.lmdb
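The cropping step above can be illustrated with a short sketch. This is not the repo's actual scripts/data_preparation/sidd.py, just an assumed illustration of how a large image is tiled into fixed-size patches before being written to lmdb; the 384-pixel step between patches is an assumption for the example.

```python
# Illustrative sketch (NOT the repo's scripts/data_preparation/sidd.py):
# compute the top-left corner of every 512x512 patch when tiling an
# image, the way lmdb preparation scripts typically do.
# Assumes the image is at least patch_size in each dimension.

def patch_coordinates(height, width, patch_size=512, step=384):
    """Return (top, left) for every patch; the last row/column is
    shifted inward so patches never run past the image border."""
    def starts(length):
        coords = list(range(0, length - patch_size + 1, step))
        if coords[-1] != length - patch_size:  # cover the remainder
            coords.append(length - patch_size)
        return coords
    return [(t, l) for t in starts(height) for l in starts(width)]

coords = patch_coordinates(1024, 1280)
print(len(coords), coords[0], coords[-1])  # 9 (0, 0) (512, 768)
```

The inward shift of the final row and column keeps every patch exactly 512x512 without padding, at the cost of some overlap near the borders.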
- Train your model:
  python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/SIDD/xxxx.yml --launcher pytorch
- Training uses 8 gpus by default. Set --nproc_per_node to the number of gpus for distributed validation.
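For context on what --nproc_per_node controls: torch.distributed.launch starts one worker process per GPU and gives each worker a distinct rank, which the training script uses to shard work. The sketch below is a simplified, illustrative model of that rank bookkeeping, not the real launcher's code.

```python
# Simplified sketch of how a distributed launcher derives per-process
# ranks (illustrative only, not torch.distributed.launch itself).
# With --nproc_per_node=8 on one node it starts 8 workers, one per GPU.

def worker_env(nnodes=1, node_rank=0, nproc_per_node=8,
               master_port=4321):
    """Return the per-process rank bookkeeping the launcher sets up."""
    world_size = nnodes * nproc_per_node
    return [
        {
            "RANK": node_rank * nproc_per_node + local_rank,
            "LOCAL_RANK": local_rank,  # which GPU on this node
            "WORLD_SIZE": world_size,
            "MASTER_PORT": master_port,
        }
        for local_rank in range(nproc_per_node)
    ]

envs = worker_env()
print([e["RANK"] for e in envs])  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Changing --nproc_per_node simply changes how many such workers are spawned, so it should match the number of available GPUs.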
- Test your model:
  python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 basicsr/test.py -opt ./options/test/SIDD/xxx.yml --launcher pytorch
- Testing uses a single gpu by default. Set --nproc_per_node to the number of gpus for distributed validation.
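Denoising results on SIDD are conventionally reported as PSNR. For reference, the metric can be computed as below; this is the textbook formula, not necessarily basicsr's exact implementation, and the tiny 1x2 "images" are only for demonstration.

```python
import math

# Textbook PSNR between a denoised image and its ground truth
# (illustrative; not necessarily basicsr's exact implementation).

def psnr(pred, gt, max_val=255.0):
    """pred, gt: equally sized nested lists of pixel values."""
    flat_pred = [p for row in pred for p in row]
    flat_gt = [g for row in gt for g in row]
    mse = sum((p - g) ** 2 for p, g in zip(flat_pred, flat_gt)) / len(flat_gt)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

print(round(psnr([[0, 0]], [[0, 16]]), 2))  # 27.06
```

Higher is better; identical images give infinite PSNR, and each halving of the mean squared error adds about 3 dB.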