SavageVic opened 6 months ago
To enhance your training process, consider adjusting the following parameters:
```
('--num_rays', type=int, default=4096 * 16, help="num rays sampled per image for each training step")
('--cuda_ray', action='store_true', help="use CUDA raymarching instead of pytorch")
('--max_steps', type=int, default=16, help="max num steps sampled per ray (only valid when using --cuda_ray)")
('--num_steps', type=int, default=16, help="num steps sampled per ray (only valid when NOT using --cuda_ray)")
('--upsample_steps', type=int, default=0, help="num steps up-sampled per ray (only valid when NOT using --cuda_ray)")
('--update_extra_interval', type=int, default=16, help="iter interval to update extra status (only valid when using --cuda_ray)")
('--max_ray_batch', type=int, default=4096, help="batch size of rays at inference to avoid OOM (only valid when NOT using --cuda_ray)")
```
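For reference, here is a runnable sketch of how these options might be declared and overridden with `argparse`. The `build_parser` function name is my own; this is not the repo's actual code, just the standard pattern the fragments above correspond to:

```python
import argparse

def build_parser():
    # Hypothetical parser assembling the options listed above.
    parser = argparse.ArgumentParser()
    parser.add_argument('--num_rays', type=int, default=4096 * 16,
                        help="num rays sampled per image for each training step")
    parser.add_argument('--cuda_ray', action='store_true',
                        help="use CUDA raymarching instead of pytorch")
    parser.add_argument('--max_steps', type=int, default=16,
                        help="max num steps sampled per ray (only valid when using --cuda_ray)")
    parser.add_argument('--num_steps', type=int, default=16,
                        help="num steps sampled per ray (only valid when NOT using --cuda_ray)")
    parser.add_argument('--upsample_steps', type=int, default=0,
                        help="num steps up-sampled per ray (only valid when NOT using --cuda_ray)")
    parser.add_argument('--update_extra_interval', type=int, default=16,
                        help="iter interval to update extra status (only valid when using --cuda_ray)")
    parser.add_argument('--max_ray_batch', type=int, default=4096,
                        help="batch size of rays at inference to avoid OOM (only valid when NOT using --cuda_ray)")
    return parser

# Example: override a couple of values, as one would on the command line.
opt = build_parser().parse_args(['--num_rays', '32768', '--cuda_ray'])
print(opt.num_rays, opt.cuda_ray, opt.max_ray_batch)
```

Note that `--num_steps`, `--upsample_steps`, and `--max_ray_batch` only take effect without `--cuda_ray`, while `--max_steps` and `--update_extra_interval` only matter with it, so changing the wrong group for your mode will have no visible effect.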
Thanks very much.
Best regards, Chris
Hi Chia-Lin, thanks very much.
Best wishes and regards, Chris
I checked these against the defaults and nothing has actually changed! Given the configuration above, how should I modify it?
I have set the patch size to 64, but I don't see any obvious training speed improvement on a 3090 24G GPU.
Could anyone give some suggestions? Thanks very much in advance.
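One rough way to check whether a setting change (e.g. a different patch size or ray count) actually affects speed is to time iterations directly rather than eyeballing logs. A minimal, framework-agnostic timing sketch, with a placeholder standing in for one training step:

```python
import time

def benchmark(step_fn, n_warmup=3, n_iters=10):
    """Average wall-clock time per call of step_fn, in seconds.

    step_fn stands in for one training iteration. For GPU code,
    call torch.cuda.synchronize() inside step_fn before returning,
    otherwise async CUDA kernels make the timing misleading.
    """
    for _ in range(n_warmup):       # warm-up: caches, JIT, allocator
        step_fn()
    t0 = time.perf_counter()
    for _ in range(n_iters):
        step_fn()
    return (time.perf_counter() - t0) / n_iters

# Usage with a dummy CPU workload standing in for a training step:
avg = benchmark(lambda: sum(range(10000)))
print(f"avg step time: {avg:.6f}s")
```

Comparing the averaged per-step time under two configurations makes it clear whether a change like patch size 64 is actually buying anything.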
Best regards, Chris