Closed xiaobaozi1996 closed 4 years ago
Hi,
Sorry, the old links were deactivated. The links have been updated with new models that should be available now.
You should also pull the latest version of the code, as these updated models are slightly different than the original models. The original models were slightly different than those described in the supplementary material of the paper (e.g., bottleneck resolution and final output resolution). The new models and resulting images are thus more consistent with those seen in the paper.
Both models are in Dropbox and the updated links appear to work, are you unable to download the chair model?
On Thu, Dec 5, 2019 at 1:24 AM maoyali notifications@github.com wrote:
The updated model for the car dataset is available. When will you update the pretrained model for the chair dataset?
Thank you very much, it's ok now~
Hi, I tested your model quantitatively. The results are as follows:
They are inconsistent with the paper:
There is a huge difference in the L1 value. Why? Is there some transformation applied at the end?
A couple of points:
While these models use the same network architecture as the ones used for the paper, the final numbers for these tests were overall slightly higher than the ones used for the evaluation. We did some code cleanup to make the repo easier for others to use and understand, and wanted to make sure that we could reproduce the original results. We left the training running for a little longer, and ultimately it converged to slightly better values than those in the paper.
To make sure that the numbers reported for the table in the paper for our results and the other approaches were accurate and consistent, those numbers were actually obtained using our results within the Tensorflow evaluation framework in the public code release for Sun et al. 2018. For the images generated using their approach, these numbers appeared to be generally consistent with what they report (seen in the table above). However, we also noticed that for our images the L1 values produced by their evaluation code were higher than the L1 values that we report in our framework, which just reports the raw mean L1 loss.
From looking at the code in their repository, it appears that they apply a rescaling (by a factor of 1.5) to the raw L1 value for the results on ShapeNet objects. They also seem to use a different range for the image pixel values (values in the range of -1 to 1, while in our framework images are in the range of 0 to 1).
So it makes sense that the reported values obtained using their framework would be higher than those in ours. If you just want to use the raw L1 loss for images with pixel values in the range of 0 to 1, you can use the results from our evaluation code.
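To see how those two conventions change the reported number, here is a minimal sketch (assuming the other framework simply computes a mean absolute error after mapping pixels to [-1, 1] and then applies the 1.5x factor; the array shapes and random data are illustrative, not from either codebase):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical predicted and ground-truth images with pixels in [0, 1].
pred = rng.random((4, 64, 64, 3))
target = rng.random((4, 64, 64, 3))

# Raw mean L1 loss, as reported by this repo's evaluation code.
l1_01 = np.mean(np.abs(pred - target))

# Mapping pixels from [0, 1] to [-1, 1] doubles every absolute difference.
l1_11 = np.mean(np.abs((2 * pred - 1) - (2 * target - 1)))

# The additional 1.5x rescaling applied to ShapeNet results then makes the
# reported value 3x the raw [0, 1] L1.
l1_rescaled = 1.5 * l1_11

assert np.isclose(l1_11, 2 * l1_01)
assert np.isclose(l1_rescaled, 3 * l1_01)
```

So under these assumptions the L1 reported by their framework would be up to 3x the raw value, which is consistent with the gap you are seeing.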
Ok, I got it! Thanks~ My research is closely related to your paper. I want to know more details about the model, so I have sent some questions about it to your email. Please check it~
Hello, the link to your pretrained model for the car dataset isn't available now. Could you check the link in the README file? Thanks!