Hey @logan-ncc, there have been many changes to master since the version you have been working on, and most of the conflicts in your pull request stem from that rather than from the code you added. The easiest path forward would be to merge master into your branch, verify locally that everything still works (making any needed changes on your branch), and then open a pull request merging your changes back into master, which should then go through cleanly.
So either you can do this yourself, or you can give me permissions on your fork so that I can check out your branch and do it. If you go the permissions route, could you point me to an example script/model that currently works for you, so I can use it to verify everything still works after I merge with master?
I've merged in master, but there were a lot of changes. I still need to run some tests to make sure everything is working properly.
Thanks @logan-ncc!
@nhiggs @jubbens The model I ran overnight seemed to work, so I think this is OK to merge. The only caveat is that I had to add a variable scope to the graph in order to reuse the graph between pruning runs; I hope that didn't break anything elsewhere in the system.
I had to revert the pull request because the addition of the variable scope does not mix well with how we currently save and reload models. The issue arises when loading a previously saved model separately, because the variable names have changed (some with "dpp/" prepended, others with "dpp/dpp/..."). This broke several of our other scripts.
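For anyone hitting this later, here is a minimal TF1 sketch (with a hypothetical layer name, not the actual DPP graph) of the mismatch: a checkpoint saved at the root scope stores variables under names like "conv1/weights", but rebuilding the graph inside `tf.variable_scope('dpp')` yields "dpp/conv1/weights", so a plain `Saver.restore` cannot find them.

```python
import tensorflow as tf

def build_model():
    # Hypothetical layer; stands in for whatever DPP actually builds.
    with tf.variable_scope('conv1'):
        tf.get_variable('weights', shape=[3, 3, 1, 8])

# Save a model built at the root scope; checkpoint stores "conv1/weights".
g1 = tf.Graph()
with g1.as_default():
    build_model()
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, '/tmp/model.ckpt')

# Rebuild the same model under a 'dpp' scope and try to restore it.
g2 = tf.Graph()
with g2.as_default():
    with tf.variable_scope('dpp'):
        build_model()  # variable is now named "dpp/conv1/weights"
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Fails with NotFoundError: the checkpoint has no key
        # "dpp/conv1/weights". A name-remapping Saver (passing a dict of
        # checkpoint names to variables) is one possible, if hacky, fix.
        saver.restore(sess, '/tmp/model.ckpt')
```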
I had a couple of hacky fixes, but nothing that seemed great. For now I am just reverting this and will come back to it when I have more time. I think you will need to open another pull request, as this one was closed after I initially accepted it.
OK, unfortunately I'm heading to BC for a week, so I won't be able to look at this at all. Can you set the variable scope with an empty string? Maybe that would be an easy fix. Otherwise, we need a way to reuse the layer weights between runs, which worked fine before you added the hyperparameter search changes. Maybe just saving and reloading the model every time? It's not ideal, but it might work.
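One reuse pattern that might avoid the "dpp/dpp" doubling mentioned above is to capture the `VariableScope` object on the first run and re-enter it with `reuse=True` on later runs. A minimal sketch, assuming the "dpp" scope name from the earlier comment:

```python
import tensorflow as tf

# Build the layer weights once and keep a handle to the VariableScope object.
with tf.variable_scope('dpp') as scope:
    w = tf.get_variable('weights', shape=[128, 10])

# Later pruning run: re-enter the captured scope object with reuse=True.
# Passing the VariableScope object (rather than the string 'dpp') does not
# nest, so names stay "dpp/weights" instead of doubling to "dpp/dpp/weights".
with tf.variable_scope(scope, reuse=True):
    w_again = tf.get_variable('weights', shape=[128, 10])

assert w is w_again  # same underlying variable, no new names in the graph
```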
Implemented DeepCompression
Models can now be compressed using iterative pruning after creation. They can also be compressed further using TensorFlow's quantization support; however, the TensorFlow implementation does not currently support all of the functionality required for this to work with most models.
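For reference, a hedged sketch of the iterative magnitude-pruning step this adds (the actual DPP API may differ; `weights_var` and `retrain` below are hypothetical stand-ins):

```python
import numpy as np
import tensorflow as tf

def prune_variable(sess, var, sparsity):
    """Zero out the fraction `sparsity` of smallest-magnitude weights in
    `var` and return a binary mask marking the surviving weights."""
    w = sess.run(var)
    threshold = np.percentile(np.abs(w), sparsity * 100)
    mask = (np.abs(w) >= threshold).astype(w.dtype)
    sess.run(var.assign(w * mask))
    return mask

# Iterative schedule: prune a little, retrain, prune more, retrain, etc.
# The retrain step should apply `mask` to the gradients so that pruned
# weights stay at zero between pruning rounds.
# for sparsity in (0.5, 0.7, 0.9):
#     mask = prune_variable(sess, weights_var, sparsity)
#     retrain(sess, mask)
```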