Open zjw11111 opened 2 years ago
The default parameters require at least 12 GB of GPU memory.
You can try reducing the number of channels or the number of iterations.
I understated it: with the command the authors provide, at least 22 GB is needed.
What I meant was that 12 GB is needed if batchSize is set to 1.
Hello, I keep running out of memory when running the code. I have tried many approaches, but it still doesn't work. I hope to get your reply.
First, thank you for your attention to our work!
Here is some advice:
Step 1) Check that the code runs without any bugs and that all settings are configured correctly.
Step 2) Depending on your GPU, reducing the number of dual channels ("num_channel"), the number of iterative stages ("S"), and the number of ResBlocks in every ProxNet ("T") in train.py should alleviate the OOM issue. For how to set these key parameters reasonably, please refer to our previous work.
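For readers unsure where these knobs live, here is a minimal sketch of how such parameters are typically exposed via argparse in a train.py. The flag names below mirror the parameter names mentioned above, but the actual repository may use different flags, so check train.py before relying on them.

```python
# Hypothetical sketch: exposing the memory-relevant knobs via argparse.
# Flag names are assumptions based on the parameter names quoted above;
# verify against the actual train.py in the repository.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="training config sketch")
    # Defaults follow the settings reported in the paper.
    p.add_argument("--num_channel", type=int, default=32,
                   help="number of dual channels")
    p.add_argument("--S", type=int, default=10,
                   help="number of iterative stages")
    p.add_argument("--T", type=int, default=4,
                   help="number of ResBlocks per ProxNet")
    p.add_argument("--batchSize", type=int, default=1)
    return p

# Example: reduced settings for a smaller GPU (halved stages and channels).
args = build_parser().parse_args(["--num_channel", "16", "--S", "5", "--T", "2"])
print(args.num_channel, args.S, args.T, args.batchSize)
```

Lowering any of these three values trades reconstruction quality for memory, so reduce them gradually and re-validate.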
Yes, with the default settings in our paper (batchSize=1, image size=416 * 416, S=10, T=4, num_channel=32), about 22 GB of GPU memory is indeed required. If your GPU memory is insufficient, you can consider reducing the number of iterative stages, the number of dual channels, or the number of residual blocks, or regenerating smaller sinograms and CT images. Tune the first three factors according to your own GPU memory; performance will likely drop somewhat. For a trend analysis of these three factors, please refer to our previous related work.
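As a rough planning aid, one can assume activation memory grows approximately linearly in S, T, and num_channel, and quadratically in the image side length. This linear/quadratic scaling is my own back-of-envelope assumption, not the authors' measured model; it is merely anchored to the ~22 GB figure quoted above for the default settings.

```python
# Back-of-envelope memory estimate (an assumption, not a measured model):
# memory ~ linear in S, T, num_channel; ~ quadratic in image side length.
# Calibrated to ~22 GB at the paper's default settings.
REF = dict(S=10, T=4, num_channel=32, side=416, mem_gb=22.0)

def rough_mem_gb(S, T, num_channel, side):
    scale = (S / REF["S"]) * (T / REF["T"]) \
            * (num_channel / REF["num_channel"]) * (side / REF["side"]) ** 2
    return REF["mem_gb"] * scale

print(round(rough_mem_gb(10, 4, 32, 416), 1))  # defaults -> 22.0
print(round(rough_mem_gb(5, 4, 16, 416), 1))   # halved stages and channels -> 5.5
```

Such an estimate only helps pick a starting point; actual usage depends on the framework, optimizer state, and implementation details, so verify with a real run.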
Nice work! I'm curious how long it took to run this experiment. It seems to be really slow on a V100.
Thanks for your attention. With the default settings reported in our paper, it would take 2-3 days for 100 epochs on a single V100.