mit-han-lab / tiny-training

On-Device Training Under 256KB Memory [NeurIPS'22]
http://tinytraining.mit.edu
MIT License

From training to IR translation #4

Open kgorgor opened 1 year ago

kgorgor commented 1 year ago

Hi @songhan @zhijian-liu @Lyken17 @tonylins @synxlin. To my understanding (correct me if I'm wrong), the first step is to train the models and save their weights to .pth files, i.e., follow the steps in the README in the "algorithm" folder. The second step is to translate the PyTorch models into .pkl and .json files, i.e., follow the steps in the "compilation" folder.

I have completed the two steps separately, but the problem is how to connect them. In other words, how do I take the .pth files produced by the first step and perform the translation in the second step? I tried simply calling model.load_state_dict, but the model from the first step had mcu_head_type set to "fp", while the script in the second step (mcu_ir_gen.py) requires it to be "quantized". I also tried running the first step with mcu_head_type set to "quantized", but that caused a huge accuracy drop.
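Concretely, my attempt looks roughly like the sketch below. `build_quantized_model` is just a placeholder for however mcu_ir_gen.py constructs its quantized-head network (it is not a real function in this repo), and the paths are made up:

```python
import torch

def build_quantized_model():
    # Placeholder only: stands in for the mcu_head_type="quantized" model that
    # mcu_ir_gen.py constructs; this repo does not actually define this function.
    raise NotImplementedError

# Weights saved by step 1 (trained with mcu_head_type="fp").
ckpt = torch.load("step1_fp_head.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # unwrap if the weights are nested

model = build_quantized_model()

# strict=False loads every tensor whose name/shape matches and reports the rest,
# which is where the fp-head vs. quantized-head parameters show up.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```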

I would appreciate it if you could provide some help!

Lyken17 commented 1 year ago

A .pth file only stores the weights of the PyTorch model; the IR generation is performed by a manual conversion.

You may refer to https://github.com/mit-han-lab/tiny-training/blob/main/compilation/mcu_ir_gen.py for the detailed process.
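For context, a minimal sketch of what a .pth checkpoint actually contains (the path is a placeholder); there is no graph or IR inside it, only named tensors:

```python
import torch

# A .pth checkpoint is just a (possibly nested) dict of tensors.
ckpt = torch.load("your_checkpoint.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # unwrap if the weights are nested

# Inspect a few entries: parameter name, shape, dtype.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)
```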

kgorgor commented 1 year ago

Yes, let me rephrase my question: what should I do if I want to manually modify the weights obtained from the first step and then load them into the model that gets converted in the second step?
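For example, what I have in mind is roughly this sketch (the paths, the wrapping key, and the specific edit are placeholders, not this repo's actual layout):

```python
import torch

# Load the weights trained in step 1.
ckpt = torch.load("algorithm_output.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # unwrap if the weights are nested

# Example modification: zero out every bias tensor
# (replace this with whatever manual edit is needed).
for name in list(state_dict.keys()):
    if name.endswith(".bias"):
        state_dict[name] = torch.zeros_like(state_dict[name])

# Save the edited weights back to a .pth for the conversion step to pick up.
torch.save({"state_dict": state_dict}, "modified_for_compilation.pth")
```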

csjimmylu commented 1 year ago

Hi @kgorgor,

Were you able to run the code generation.py file that converts your model into C++ code (from the other MIT tinyengine repo) with your own customized model for on-device training?

If yes, did the 3 files you used to compile your customized model come from:

  1. graph.json file under the .model/testproj folder after you ran the compilation/ir2json.py from MIT's tiny-training repo?
  2. params.pkl file under the .model/testproj folder after you ran the compilation/ir2json.py from MIT's tiny-training repo?
  3. scale.json file under the ir_zoos/proxyless_quantize folder after you ran the compilation/mcu_ir_gen.py from MIT's tiny-training repo?

Thanks!

kgorgor commented 1 year ago

Thanks for your reply! @729557989

Did you mean https://github.com/mit-han-lab/tinyengine? I didn't run any code from it, and I didn't find a generation.py file in it. I thought I would be able to translate my customized PyTorch models and get the IRs using only the code in this repo, right?