Hey there @Aleck16, I assume you are using the NYU dataset. If so, could you please run one experiment where you set lam=0? Simply replace the following code block in Train_FuseNet.py:
# Grid search for lambda values
lambdas = np.linspace(0.0004, 0.005, num=10)
for lam in lambdas:
    print(lam)
    if dset_type == 'NYU':
        model = FuseNet(40)
    else:
        model = FuseNet(37)
    solver = Solver_SS(optim_args={"lr": 5e-3, "weight_decay": 0.0005}, loss_func=CrossEntropy2d)
    solver.train(model, lam, dset_type, train_loader, test_loader, resume, log_nth=5, num_epochs=300)
with this one:
lam = 0.0
model = FuseNet(40)  # 40 classes for NYU; use FuseNet(37) for SUN RGB-D
solver = Solver_SS(optim_args={"lr": 5e-3, "weight_decay": 0.0005}, loss_func=CrossEntropy2d)
solver.train(model, lam, dset_type, train_loader, test_loader, resume, log_nth=5, num_epochs=300)
No matter how small the lambda is, the gradients backpropagated from the classification head will affect the segmentation accuracy; setting lam to zero removes that influence entirely, so you should be able to obtain the mentioned results with this setup. Please let me know about the results!
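To make the role of lam concrete, here is a minimal sketch of the joint objective, assuming Solver_SS combines the segmentation and classification losses as a weighted sum (the function and argument names here are hypothetical, not the repository's exact code):

import torch.nn.functional as F

def joint_loss(seg_scores, seg_labels, cls_scores, cls_labels, lam):
    # Per-pixel segmentation loss (CrossEntropy2d comes from the repository).
    seg_loss = CrossEntropy2d(seg_scores, seg_labels)
    # Scene-classification loss from the classification head.
    cls_loss = F.cross_entropy(cls_scores, cls_labels)
    # lam only scales the classification term; with lam = 0.0 the backward pass
    # sends no classification gradients through the shared encoder.
    return seg_loss + lam * cls_loss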
P.S.: The results you mentioned were obtained with the Caffe implementation (Org), and the comparison between the PyTorch and Caffe implementations can be seen in the table below.
So the 0.373 IoU on SUN RGB-D was obtained with the Caffe implementation, and with PyTorch you can only get 0.262?
Yep, that's correct. In fact, the accuracy normally wouldn't deviate that much from one framework implementation to another; I believe this rather large discrepancy occurred mostly because of the images excluded from the SUN RGB-D dataset during the initial experiments, as we previously discussed in #3.
Hi @zanilzanzan, we ran the experiment with the method you provided; the parameters are as follows:
lam = 0.0
model = FuseNet(40)
solver = Solver_SS(optim_args={"lr": 1e-2, "weight_decay": 0.0005}, loss_func=CrossEntropy2d)
solver.train(model, lam, dset_type, train_loader, test_loader, resume, log_nth=5, num_epochs=300)
batch_size = 4
The accuracy we obtained is as follows:
pixel-wise accuracy: 0.656, mean class-wise IoU accuracy: 0.288, mean accuracy: 0.430
This still doesn't reach the accuracy reported in your code:
pixel-wise accuracy: 0.668, mean class-wise IoU accuracy: 0.300, mean accuracy: 0.44
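For reference, the three metrics above are usually computed from the confusion matrix roughly like this (a minimal sketch; the repository's evaluation code may differ in the details):

import numpy as np

def segmentation_metrics(conf):
    # conf: (num_classes, num_classes) confusion matrix, rows = ground truth, columns = prediction
    tp = np.diag(conf).astype(float)                       # correctly labelled pixels per class
    pixel_acc = tp.sum() / conf.sum()                      # global pixel-wise accuracy
    class_acc = tp / conf.sum(axis=1)                      # per-class accuracy
    iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)  # per-class intersection over union
    return pixel_acc, np.nanmean(class_acc), np.nanmean(iou)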
By the way, in your source code the class weights for the NYU dataset are commented out in the CrossEntropy2d() method; I don't know whether that has an impact on the accuracy.
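If the weights were re-enabled, they would typically be passed straight into the cross-entropy call. Here is a minimal sketch, assuming CrossEntropy2d wraps PyTorch's cross-entropy (nyu_class_weights is a hypothetical tensor of 40 per-class weights, e.g. from median-frequency balancing):

import torch
import torch.nn.functional as F

def CrossEntropy2d(scores, targets, weight=None):
    # scores: (N, C, H, W) raw logits, targets: (N, H, W) ground-truth class indices
    # weight: optional (C,) tensor of per-class weights; passing it changes how much
    # rare classes contribute to the loss, so it can affect the final accuracy
    return F.cross_entropy(scores, targets, weight=weight)

# Hypothetical usage:
# nyu_class_weights = torch.ones(40)  # replace with the repository's NYU weights
# loss = CrossEntropy2d(scores, targets, weight=nyu_class_weights)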
Hello, I ran your code locally with PyTorch and found that the accuracy doesn't reach the numbers you mention on GitHub: "It gives 66.0% global pixelwise accuracy, 43.4% average classwise accuracy and 32.7% average classwise IoU." We only reached 64.8% global pixelwise accuracy, 41.0% average classwise accuracy and 27.8% average classwise IoU.
We are quite stuck at the moment and don't know where the problem is. We would appreciate any help or suggestions. Thank you.