ricky40403 / DSQ

pytorch implementation of "Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks"

forward() does not recognize own class attributes #7

Open afonso-sousa opened 4 years ago

afonso-sousa commented 4 years ago

Hello. First of all, congratulations on your work. I would like to reproduce your work, but I am facing a strange problem. When trying to use your DSQConv layer, I get the following error: "torch.nn.modules.module.ModuleAttributeError: 'DSQConv' object has no attribute 'running_lw'". Replacing register_buffer with a simple attribute assignment did not fix it. Can you reproduce the problem or help me in any way?
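
For context, a minimal sketch of the buffer pattern the traceback points at (the class body here is hypothetical; only `running_lw` comes from the error message). A buffer read in `forward()` must have been registered in `__init__`, so the error suggests that registration never ran:

```python
import torch
import torch.nn as nn

class DSQConvSketch(nn.Conv2d):
    """Hypothetical sketch of the buffer pattern, not the actual DSQConv code."""

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__(in_channels, out_channels, kernel_size, **kwargs)
        # If this registration is missing (e.g. the module code was truncated),
        # any read of self.running_lw in forward() raises:
        # ModuleAttributeError: 'DSQConv' object has no attribute 'running_lw'
        self.register_buffer('running_lw', torch.tensor(-1.0))

    def forward(self, x):
        lw = self.running_lw  # only resolves if the buffer was registered
        return super().forward(x)
```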

I would also like to ask how I can store the models in an encoded way, to compare the storage savings of lower bit-width solutions.
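
On the storage question, one simple way to compare serialized sizes is to save state dicts to an in-memory buffer. A minimal sketch (the tensors and the int8 cast below are illustrative, not DSQ's actual encoding):

```python
import io
import torch

def stored_size_bytes(state_dict):
    """Serialize a state_dict in memory and return its size in bytes."""
    buf = io.BytesIO()
    torch.save(state_dict, buf)
    return buf.getbuffer().nbytes

# Hypothetical comparison: fp32 weights vs. the same tensor cast to int8.
fp32_sd = {"weight": torch.randn(64, 64, 3, 3)}
int8_sd = {k: (v * 127).round().clamp(-128, 127).to(torch.int8)
           for k, v in fp32_sd.items()}

print(stored_size_bytes(fp32_sd))  # roughly 4 bytes per value
print(stored_size_bytes(int8_sd))  # roughly 1 byte per value
```

For bit-widths below 8 you would additionally pack several values into each byte (e.g. four 2-bit weights) before saving.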

Thank you in advance.

mkimhi commented 2 years ago


Did you figure out a solution? I'm having the same issue.

yyl-github-1896 commented 2 years ago

Hi, I had the same problem as you, and I fixed it. It seems the versioning of the "PyTransformer" repository has some problems. If you clone the repository directly, the file "PyTransformer/transformers/quantize.py" will have a length of 146 lines, but in the original repository it should have 217 lines. You can download "PyTransformer" in .zip format and unzip it instead.
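
A quick way to check whether your copy of the file is the truncated one:

```python
# Count the lines in the file named in the comment above.
with open("PyTransformer/transformers/quantize.py") as f:
    print(sum(1 for _ in f))  # expect 217; a broken checkout shows 146
```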