kedartatwawadi / NN_compression

MIT License

Will you kindly provide your training parameters? #8


suikammd commented 6 years ago

Sorry, I'm new to training LSTMs. Will you kindly provide your training parameters? I can't reproduce the results you reported. Or is it because the arithmetic coding block isn't provided?

kedartatwawadi commented 6 years ago

Hello, the loss function gives the compression performance (bits/symbol) that would be achieved if arithmetic coding were used.

See the updated README and https://web.stanford.edu/~kedart/files/deepzip.pdf for more info. The default parameters in src/models/char_rnn.py worked well. If you want to explore hyperparameters, consider using src/scripts/run.py, modifying it appropriately for your needs (some example scripts are in the same folder).
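To illustrate the point above: the average cross-entropy of the model's next-symbol predictions, measured in bits, is exactly the rate (bits/symbol) an ideal arithmetic coder would achieve using those same predictions. A minimal sketch (the function name and the toy probabilities are illustrative, not from the repo):

```python
import math

def bits_per_symbol(predicted_probs):
    """Average cross-entropy in bits/symbol.

    predicted_probs: the probability the model assigned to each
    *true* next symbol in the sequence.
    """
    return sum(-math.log2(p) for p in predicted_probs) / len(predicted_probs)

# A model that always assigns probability 0.5 to the correct symbol
# costs exactly 1 bit/symbol; probability 0.25 costs 2 bits/symbol.
print(bits_per_symbol([0.5, 0.5, 0.5, 0.5]))  # -> 1.0
```

So a training loss reported in bits/symbol already tells you the compressed size you would get, without actually running the entropy coder.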

suikammd commented 6 years ago

@kedartatwawadi So did you ultimately use src/scripts/run.py to train on the DNA datasets, and is the model in src/models/char_rnn.py your GRU-based model? Will you kindly add a clear statement of which network performs best? I am trying to make it work on an FPGA (a hardware implementation, for personal use). Thanks a lot!
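Since the thread turns on the absent arithmetic coding block, here is a minimal sketch of arithmetic coding itself. It uses exact `Fraction` arithmetic and a fixed toy symbol distribution for clarity; this is not the repo's implementation, which would use fixed-precision integer arithmetic and per-step RNN-predicted probabilities instead:

```python
from fractions import Fraction

def intervals(probs):
    # Map each symbol to its cumulative-probability subinterval of [0, 1).
    out, cum = {}, Fraction(0)
    for s, p in probs.items():
        out[s] = (cum, cum + p)
        cum += p
    return out

def encode(msg, probs):
    # Narrow [low, high) by each symbol's subinterval; any number in the
    # final interval identifies the message.
    iv, low, high = intervals(probs), Fraction(0), Fraction(1)
    for s in msg:
        a, b = iv[s]
        width = high - low
        low, high = low + width * a, low + width * b
    return low

def decode(code, n, probs):
    # Invert the narrowing: at each step, find the symbol whose
    # subinterval contains the (rescaled) code value.
    iv, low, high = intervals(probs), Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        width = high - low
        x = (code - low) / width
        for s, (a, b) in iv.items():
            if a <= x < b:
                out.append(s)
                low, high = low + width * a, low + width * b
                break
    return "".join(out)

# Toy DNA alphabet with fixed (non-learned) probabilities.
probs = {"A": Fraction(1, 2), "C": Fraction(1, 4),
         "G": Fraction(1, 8), "T": Fraction(1, 8)}
msg = "ACGTAC"
print(decode(encode(msg, probs), len(msg), probs))  # -> "ACGTAC"
```

The final interval has width equal to the product of the symbol probabilities, so writing it out takes about sum of -log2(p) bits, which is why the cross-entropy loss directly predicts the compressed size.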