# QuickStart of nnUNet
## Environment

- Use Linux/macOS; WSL2 is recommended on Windows.
- Create a virtual environment (Anaconda is recommended).
- Install Jupyter Notebook.
- Install PyTorch >= 2.0 (see the official PyTorch install page).
## Prepare nnUNet

Clone and install nnUNet:

```bash
git clone git@github.com:MIC-DKFZ/nnUNet.git --depth 1
cd nnUNet/
pip install -e .  # install in develop mode
```

Clone and install hiddenlayer (visualizes the model topology graph):

```bash
pip install --upgrade git+https://github.com/FabianIsensee/hiddenlayer.git
```
## Prepare dataset

Create the data directory:

```
./nnUNet_raw_data_base/
├── nnUNet_preprocessed
├── nnUNet_raw
│   ├── nnUNet_cropped_data
│   └── nnUNet_raw_data
│       └── Dataset001_teeth
│           ├── dataset.json
│           ├── imagesTr
│           ├── imagesTs
│           ├── inferTs
│           └── labelsTr
└── nnUNet_results
```

Write `dataset.json`:
```json
{
    "channel_names": {
        "0": "CBCT"
    },
    "labels": {
        "background": 0,
        "Teeth": 1
    },
    "numTraining": 12,
    "file_ending": ".nii.gz"
}
```

Put your training images into `imagesTr` and their labels into `labelsTr`; `imagesTs` is for testing.
Remove the channel postfix from the label filenames:

```python
import os

# labels must not carry the `_0000` channel suffix that the images use
for file in os.listdir('.'):
    os.renames(file, file.replace('_0000.nii', '.nii'))
```

Modify `nnUNet_raw`, `nnUNet_preprocessed`, and `nnUNet_results` to your own paths in `paths.py`.
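As an alternative to editing `paths.py`, nnU-Net v2 can read the three locations from environment variables of the same names. A sketch, with placeholder paths:

```python
import os

# set these before running any nnUNetv2_* command from this process;
# the three paths below are placeholders for your own directories
os.environ["nnUNet_raw"] = "/path/to/nnUNet_raw"
os.environ["nnUNet_preprocessed"] = "/path/to/nnUNet_preprocessed"
os.environ["nnUNet_results"] = "/path/to/nnUNet_results"
```

Exporting the same three variables in your shell profile has the same effect and survives across sessions.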
Run dataset preprocessing:

```bash
nnUNetv2_plan_and_preprocess -d <dataset_id> --verify_dataset_integrity
```
## Train

- Training:
  ```bash
  nnUNetv2_train <dataset_id> <UNET_CONFIGURATION> <FOLD>
  ```
- If you do not know how to set `<UNET_CONFIGURATION>`, just run all of them (`2d`, `3d_fullres`, `3d_lowres`, `3d_cascade_fullres`) and let nnUNet decide.
- `<FOLD>` specifies which fold of the 5-fold cross-validation is trained; use `all` if you do not want to train a single fold.
- Add the `--npz` flag to enable `nnUNetv2_find_best_configuration` later.
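The "run all of them" advice above can be scripted. A sketch that prints the full configuration-by-fold command matrix (dataset id `1` is a placeholder), which you can pipe to a shell or a job scheduler:

```python
from itertools import product

dataset_id = "1"  # placeholder: substitute your own dataset id
configs = ["2d", "3d_fullres", "3d_lowres", "3d_cascade_fullres"]
folds = [str(f) for f in range(5)]  # the 5-fold cross-validation

# one nnUNetv2_train command per configuration/fold pair;
# --npz keeps softmax outputs for nnUNetv2_find_best_configuration
for cfg, fold in product(configs, folds):
    print(f"nnUNetv2_train {dataset_id} {cfg} {fold} --npz")
```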
## Inference

```bash
export INPUT=/lustre/home/acct-laurence/laurence-user1/wuchen/nnUNet/nnUNet_raw_data_base/nnUNet_raw_data/Task001_teeth/imagesTs
```
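The section stops before the actual prediction call; the usual next step is `nnUNetv2_predict`. A sketch that assembles the command from `$INPUT`, where the output folder, dataset id, and configuration are all placeholders:

```python
import os
import shlex

# $INPUT comes from the export above; everything else is a placeholder
input_dir = os.environ.get("INPUT", "/path/to/imagesTs")
output_dir = "/path/to/inferTs"

cmd = (
    f"nnUNetv2_predict -i {shlex.quote(input_dir)} "
    f"-o {shlex.quote(output_dir)} -d 1 -c 3d_fullres"
)
print(cmd)  # run this in your shell once training has finished
```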