vanAken opened 11 months ago
Hi, have you solved the problem? I have exactly the same problem as you. Does your code run now? If so, please send me a copy, thank you.
Can we discuss it? My email address is loiroyaletorah@gmail.com
I stopped working on this, but I'm still interested in it. I'm waiting for an update. I'm on vacation for a few weeks ...
Hi there! Your idea of using a transformer for event-based depth estimation is great.
I'm working with Python 3.9 and CUDA 11.8 and trying to update the environment.
The first issue is the `summary` function in train.py. Error description:
https://discuss.pytorch.org/t/lstm-tuple-object-has-no-attribute-size/100397
```python
from torchsummary import summary  # old
from torchinfo import summary     # new
```
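For context, the linked thread boils down to `torchsummary` calling `.size()` on whatever a layer's forward pass returns; recurrent layers return a tuple, which has no such attribute. A dependency-free sketch of that failure mode (the plain tuple below just stands in for an LSTM's `(output, (h_n, c_n))` result):

```python
# Stand-in for what nn.LSTM.forward returns: (output, (h_n, c_n)).
# torchsummary's hooks call .size() on this raw return value.
lstm_result = ("output", ("h_n", "c_n"))

try:
    lstm_result.size()  # what torchsummary effectively does on an LSTM layer
except AttributeError as err:
    print(err)  # 'tuple' object has no attribute 'size'
```

torchinfo is maintained and handles modules whose forward returns tuples, which is why swapping the import avoids this crash.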
In model/loss.py, PerceptualSimilarity has been superseded by a newer version called LPIPS: https://github.com/richzhang/PerceptualSimilarity
```python
from PerceptualSimilarity import models  # old
import lpips                             # new
```
and at line 180ff:

```python
class perceptual_loss():
    def __init__(self, weight=1.0, net='vgg', use_gpu=True):
        """
        Wrapper for PerceptualSimilarity.models.PerceptualLoss

        import lpips
        loss_fn_alex = lpips.LPIPS(net='alex')  # best forward scores
        loss_fn_vgg = lpips.LPIPS(net='vgg')    # closer to "traditional" perceptual loss, when used for optimization
        """
        self.model = lpips.LPIPS(net='vgg')  # volker: was models.PerceptualLoss(net=net, use_gpu=use_gpu)
        self.weight = weight
```
But now I get an error.
Are the two updates I made doing the right thing? How do I get `x` into the right shape, and what should it be for the given data?
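For what it's worth, on the shape question: LPIPS expects RGB-like tensors of shape (N, 3, H, W) scaled to [-1, 1]. A minimal numpy sketch, assuming `x` is a single-channel depth map in [0, 1] (that shape is my assumption, not taken from this repo):

```python
import numpy as np

# Assumed input: a (batch, 1, H, W) predicted depth map in [0, 1].
x = np.random.rand(2, 1, 8, 8)

# LPIPS wants (batch, 3, H, W) in [-1, 1]:
# repeat the single channel three times, then rescale.
x3 = np.repeat(x, 3, axis=1) * 2.0 - 1.0
print(x3.shape)  # (2, 3, 8, 8)
```

The same two steps (channel repeat, rescale to [-1, 1]) can be done in torch with `x.repeat(1, 3, 1, 1)` before handing the tensors to `lpips.LPIPS`.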
Before I dive deeper into the code, it would be nice if you could give me some advice on how to overcome this problem.