To accelerate the inference process, I attempted to adapt the inference code for multi-batch processing. However, I encountered an issue where the original NeRF rendering code appears to be incompatible with a batch_size greater than 1. This is primarily due to numerous operations, such as .view(-1,), that eliminate the batch dimension. Could you please provide any insights or plans regarding the implementation of multi-batch inference? Is such an adaptation feasible?
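For what it's worth, a minimal sketch of one possible workaround (hypothetical names, not from the actual codebase): instead of rewriting every `.view(-1, ...)` call inside the renderer, flatten the batch and ray dimensions together before calling the existing single-batch code, then restore the batch dimension afterwards. This only works when the per-point computation is independent across the batch, which I believe is the case for the plain MLP query.

```python
import torch

def render_flat(pts):
    # Stand-in for the existing renderer's per-point query, which expects
    # a flat (num_points, 3) tensor and returns one value per point.
    return pts.sum(dim=-1)

def render_batched(pts):
    """Wrap a flat renderer for batched input: (B, N, 3) -> (B, N).

    Merge batch and point dims, reuse the unmodified flat code path,
    then un-flatten. The original .view(-1, ...) calls inside the
    renderer are harmless here because B is recorded outside.
    """
    B, N, _ = pts.shape
    flat = pts.reshape(B * N, 3)   # merge batch into the point dimension
    out = render_flat(flat)        # shape (B*N,)
    return out.view(B, N)          # restore the batch dimension

pts = torch.randn(4, 1024, 3)      # batch of 4 rays sets, 1024 points each
out = render_batched(pts)
print(out.shape)                   # torch.Size([4, 1024])
```

This sidesteps the shape incompatibility without touching the rendering internals, at the cost of not exploiting any per-batch early-exit logic the renderer might have.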