Michael0x2a / nlp-capstone


Refactor models so they all follow the same interface #3

Closed Michael0x2a closed 7 years ago

Michael0x2a commented 7 years ago

We'll probably follow roughly the same interface as the character n-gram model.
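
For reference, a minimal sketch of what a shared interface could look like. This is purely illustrative; all the method names and signatures here are hypothetical placeholders, not the actual ones from the character n-gram model:

```python
from abc import ABC, abstractmethod
from typing import List

class Classifier(ABC):
    """Hypothetical shared interface -- names are guesses, not the
    actual methods from the character n-gram model."""

    @abstractmethod
    def train(self, xs: List[str], ys: List[int], num_epochs: int = 10) -> None:
        """Train (or continue training) the model on the given data."""

    @abstractmethod
    def predict(self, xs: List[str]) -> List[int]:
        """Return a predicted label for each input."""

    @abstractmethod
    def save(self, path: str) -> None:
        """Persist model parameters to disk."""

    @abstractmethod
    def restore(self, path: str) -> None:
        """Load previously saved parameters."""
```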

briankchan commented 7 years ago

So, I'm doing something a bit odd in the character n-gram model right now: I create a loss op once when the classifier is created, and then again every time training starts. IIRC, creating that first op had something to do with getting saving or summaries to work properly, but I'm not entirely sure anymore; I can check whether it's really necessary. It might make more sense to just pass the training parameters in when the classifier is created, so the loss and training ops only need to be built once (see the sketch below).
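
A minimal sketch of that alternative, assuming a TF 1.x-style graph (the layer structure, hyperparameters, and names here are made up for illustration, not taken from the actual model): hyperparameters are passed to the constructor, and the loss, train, and summary ops are each built exactly once.

```python
import tensorflow as tf  # TensorFlow 1.x-era API, matching the project's timeframe

class CharNgramClassifier:
    # Hypothetical constructor: training parameters are supplied up front,
    # so the loss/train/summary ops are created once at graph-construction
    # time and never rebuilt when training restarts.
    def __init__(self, vocab_size, embedding_size=64, learning_rate=1e-3):
        self.inputs = tf.placeholder(tf.int32, [None, None], name='inputs')
        self.labels = tf.placeholder(tf.int32, [None], name='labels')

        embeddings = tf.get_variable('embeddings', [vocab_size, embedding_size])
        pooled = tf.reduce_mean(tf.nn.embedding_lookup(embeddings, self.inputs), axis=1)
        logits = tf.layers.dense(pooled, 2, name='logits')

        # Built once here; saving and summaries can refer to these same ops.
        self.loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=self.labels, logits=logits))
        self.train_op = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
        tf.summary.scalar('loss', self.loss)
        self.summary_op = tf.summary.merge_all()
```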

Also, I'm pretty sure that with the current setup, saving, reloading into a new classifier, and then trying to train more will end up resetting to the default loss op. And since a new training op is created for each loss op, I don't think the variables for Adam will get restored properly either (though that probably only affects training time).
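
To illustrate the Adam point, a hedged sketch (the variables and paths are placeholders): if the `tf.train.Saver` is constructed after the optimizer, Adam's slot variables (the per-weight moment estimates) are included in the checkpoint, so restoring into a fresh classifier with the same graph also restores the optimizer state instead of resetting it.

```python
import tensorflow as tf  # TensorFlow 1.x-era API

# Toy graph: one weight, one loss, one Adam train op, built once.
w = tf.get_variable('w', shape=[1], initializer=tf.zeros_initializer())
loss = tf.reduce_sum(tf.square(w - 3.0))
train_op = tf.train.AdamOptimizer(0.1).minimize(loss)

# Built AFTER the optimizer, so it also covers Adam's moment variables.
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)
    saver.save(sess, '/tmp/model.ckpt')

# In a new process/classifier that builds the same graph the same way:
with tf.Session() as sess:
    saver.restore(sess, '/tmp/model.ckpt')  # Adam state comes back too
    sess.run(train_op)  # training continues from the saved optimizer state
```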