First, thank you for this interesting project in the field!
From this study, I am particularly interested in voxel super-resolution.
By studying this repo, I was able to perform voxel super-resolution with the pre-trained model, but I am wondering how it was trained. If I have a new 3D dataset, how can I train voxel super-resolution on it?
I see that the model output, produced in the order [input -> encode -> decode -> output], has dimensionality [b, 32, 32, 32, 32], i.e. [batch_size, channel, spatial_dim1, spatial_dim2, spatial_dim3].
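To make the shapes I am describing concrete, here is a minimal dummy-tensor sketch. The variable names are placeholders of mine, not the repo's actual API; I am only illustrating the dimensionality I observe:

```python
import numpy as np

# Placeholder tensors only, to make the question concrete.
# The real values come from the repo's encoder/decoder.
b = 4                                        # batch size (arbitrary here)
voxels_in = np.zeros((b, 1, 32, 32, 32))     # low-res voxel occupancy input

# What I observe after [input -> encode -> decode -> output]:
# [batch_size, channel, spatial_dim1, spatial_dim2, spatial_dim3]
features = np.zeros((b, 32, 32, 32, 32))

print(voxels_in.shape)  # (4, 1, 32, 32, 32)
print(features.shape)   # (4, 32, 32, 32, 32)
```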
At inference time, it can perform super-resolution using MISE, but I don't see how to train a new super-resolution model on a new dataset.
Thanks for reading!