I am trying to adapt the PointMLP in classification_ModelNet40/models/pointmlp.py to my own task, using the default hyperparameter settings. However, loss.backward() gets extremely slow: around 9 seconds per backward pass for one batch of 64 point clouds with 1024 points each. When I run your own ModelNet40 experiments with the same configs and the same hardware/software environment, the training speed is normal. Here is my code:
self.model.train()
for i, sample in enumerate(self.dl_train):
    self.optimizer.zero_grad()
    # Move the point cloud and the precomputed image features to the GPU.
    pc = sample["xyz"].cuda()
    img_feats = sample["img"].cuda()
    # Forward pass through PointMLP, then the point-to-image loss.
    pc_feats = self.model(pc)
    pc2img_loss = self.criterion(pc_feats, img_feats)
    pc2img_loss.backward()
    self.optimizer.step()
The loss function (self.criterion) is a simple contrastive loss.
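For reference, a minimal sketch of what such a contrastive loss could look like, assuming an InfoNCE-style symmetric cross-entropy over cosine similarities (the function name and the temperature value here are illustrative, not the exact criterion used):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(pc_feats, img_feats, temperature=0.07):
    # L2-normalize both embeddings so dot products are cosine similarities.
    pc_feats = F.normalize(pc_feats, dim=-1)
    img_feats = F.normalize(img_feats, dim=-1)
    # Pairwise similarity logits, shape (B, B); diagonal entries are the
    # matching point-cloud/image pairs (the positives).
    logits = pc_feats @ img_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: point-cloud-to-image and image-to-point-cloud.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```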
Here are my hardware/software configs:
CPU: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
GPU: NVIDIA A100
CUDA: 11.1
PyTorch: 1.8.1
Python: 3.7.16
Can you help me with it? Many thanks.