OverLordGoldDragon / see-rnn

RNN and general weights, gradients, & activations visualization in Keras & TensorFlow
MIT License

Model comparison, interpretation questions #11

Closed ghost closed 4 years ago

OverLordGoldDragon commented 4 years ago

@joy901 I don't do e-mail help, and this isn't how one contacts developers in general. If your inquiry is simple or relevant to the repository, you can open an Issue, else you're better off asking on StackOverflow (which you can link, and I may have a look).

Closing this issue, feel free to open another with relevant title and description.

OverLordGoldDragon commented 4 years ago

@joy901 You can reply as a comment and tag me, @OverLordGoldDragon (unless your reputation is too low) - which question is it? You can just 'reply' here, keeping in mind what I said earlier (no rep requirement for posting questions on SO).

OverLordGoldDragon commented 4 years ago

@joy901 Unsure which "three" you refer to; those under Model comparison are steps of ultimately a single procedure. I'll respond regarding Interpreting weights; each is its own topic to be studied. Depending on the level of detail you seek, each is either its own SO question, or a chapter to read. I'll link you some reading material - if that doesn't suffice, you can open an SO question on it and link it here (though no guarantees I'll respond):

  1. Sparsity: Sparse Autoencoders (pg. 14) -- Sparse Deep Stacking Network for Image Classification (abstract, other sections)
  2. Health: Data Science SE -- NaN weights make a model untrainable. Example
  3. Stability: Gradient clipping -- Weight constraints
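The three topics above each have a simple quantitative check. As a minimal sketch (NumPy only; function names are my own, not part of see-rnn), one can measure weight sparsity, detect NaN weights, and clip a gradient by its L2 norm:

```python
import numpy as np

def weight_sparsity(w, tol=1e-7):
    # Fraction of weights that are (near-)zero: a simple sparsity measure
    return float(np.mean(np.abs(w) < tol))

def has_nan(w):
    # NaN weights poison all downstream computations, making a model untrainable
    return bool(np.isnan(w).any())

def clip_by_norm(g, max_norm=1.0):
    # Rescale a gradient so its L2 norm does not exceed max_norm (gradient clipping)
    norm = np.linalg.norm(g)
    return g if norm <= max_norm else g * (max_norm / norm)

w = np.array([0.0, 0.5, 0.0, -1.2])
print(weight_sparsity(w))            # 0.5
print(has_nan(w))                    # False
print(clip_by_norm(np.array([3.0, 4.0]), 1.0))  # [0.6 0.8]
```

In Keras the last two are built in: optimizers accept `clipnorm`/`clipvalue` arguments, and layers accept `kernel_constraint` (e.g. `max_norm`) for weight constraints.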

OverLordGoldDragon commented 4 years ago

@joy901 Yes:

ghost commented 4 years ago

Thank you so much @OverLordGoldDragon for helping. I will delete the comments.

OverLordGoldDragon commented 4 years ago

@joy901 You're welcome, good luck