Graph-COM / HEPT

[ICML'24 Oral] LSH-Based Efficient Point Transformer (HEPT)
https://arxiv.org/abs/2402.12535

Error in backwards #2

pmcvay commented 1 week ago

I am trying to test your transformer architecture, but I run into the following errors when computing gradients.

With static_graph=True:

    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Your training graph has changed in this iteration, e.g., one parameter is unused in 
first iteration, but then got used in the second iteration. this is not compatible with static_graph 
set to True.

With static_graph=False:

    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by
making sure all `forward` function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 16: 0 1 2 3 4 5 6 7 8 9 10 11 12 31 46 61 76 91 106
 In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
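The second error message itself points at two workarounds: wrap the model with find_unused_parameters=True (and without static_graph), and set TORCH_DISTRIBUTED_DEBUG for per-parameter detail. A minimal sketch of that DDP setup, assuming a standard torchrun launch and using build_model() as a placeholder rather than the repository's actual entry point:

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    # Print per-parameter information about missing gradients, as the error suggests.
    os.environ["TORCH_DISTRIBUTED_DEBUG"] = "DETAIL"

    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = build_model().cuda(local_rank)  # build_model is a placeholder, not the repo's API
    model = DDP(
        model,
        device_ids=[local_rank],
        static_graph=False,           # the graph changes between iterations, so static_graph must stay off
        find_unused_parameters=True,  # lets the reducer tolerate parameters that receive no gradient
    )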
siqim commented 13 hours ago

This is likely due to the CUDA version you are using. You could test the code with CUDA 12.1; otherwise, you could disable the compilation step.
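If the compilation step here refers to torch.compile (an assumption, not confirmed by the thread), one minimal way to make it optional is to gate the call behind a flag; build_model() is again a hypothetical placeholder for the HEPT model constructor:

    import torch

    USE_COMPILE = False  # skip compilation when the local CUDA toolchain is older than 12.1

    model = build_model()  # placeholder for the HEPT model constructor
    if USE_COMPILE:
        # Only compile when explicitly requested; eager mode avoids the backend-compilation errors.
        model = torch.compile(model)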