Closed: elizabethchiyka closed this issue 2 years ago

No issues before this in training; does anyone have suggestions on what the issue could be or things to try?
Was this on the sequence model training step? Are you using softmax final_activation?
Yes, it was during sequence model training, and no, I was not using softmax final_activation. I edited the original post to add the very top of the output, including the sequence weight warning. Not sure if that's the root of the issue.
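For context on the softmax question: softmax normalizes scores across classes so they sum to 1 (mutually exclusive labels), while sigmoid scores each behavior independently (labels can co-occur). A minimal PyTorch sketch of the difference, with made-up logits; this is illustrative, not deepethogram's actual code:

```python
import torch

logits = torch.tensor([[2.0, -1.0, 0.5]])  # one frame, three behavior classes

# softmax: class probabilities compete and sum to 1
softmax_probs = torch.softmax(logits, dim=1)
print(softmax_probs.sum(dim=1))  # tensor([1.])

# sigmoid: each class scored independently in (0, 1); no sum constraint,
# so multiple behaviors can be flagged for the same frame
sigmoid_probs = torch.sigmoid(logits)
print(sigmoid_probs)  # tensor([[0.8808, 0.2689, 0.6225]])
```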
This is odd, because `elapsed` should never be 0. Maybe the batch didn't actually run properly? I added an epsilon in the bug_feb2022 branch; it should merge in the next few days.
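For context, the crash described here comes from dividing by an elapsed time that can be exactly 0 when a batch finishes within the timer's resolution (or doesn't actually run). A minimal sketch of the kind of epsilon guard described above; the function name and epsilon value are illustrative, not the actual deepethogram code:

```python
import time

EPS = 1e-7  # small constant; the value used in the actual fix may differ

def batches_per_second(n_batches: int, start_time: float) -> float:
    """Throughput with a guard against a zero elapsed time.

    If a batch completes within the timer's resolution,
    time.time() - start_time can be exactly 0 and the division raises
    ZeroDivisionError; adding a tiny epsilon keeps the rate finite.
    """
    elapsed = time.time() - start_time
    return n_batches / (elapsed + EPS)
```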
This should be fixed with e2df196; re-open if it doesn't fix it for you. To update: `pip install --upgrade deepethogram`