When training with the torchcrf module (version 0.3.1) under PyTorch 0.2.4, I encounter a NaN loss from the forward computation, possibly because the CRF parameters were not initialized beforehand.
Here is the log when passing one instance from an LSTM module (300 dims) through nn.Linear and then into the torchcrf module:
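For context on where such a NaN can come from: the CRF forward algorithm accumulates the partition function with a log-sum-exp recurrence over emission and transition scores. The sketch below is a minimal pure-Python illustration (not torchcrf's actual internals, and the function name is my own) showing that a single non-finite emission score, e.g. produced upstream by the LSTM or the nn.Linear projection, is enough to make the log-likelihood, and therefore the loss, NaN:

```python
import math

def log_sum_exp(xs):
    """Numerically stable log-sum-exp, the core step of the
    CRF forward algorithm (illustrative, not torchcrf's code)."""
    m = max(xs)
    # Shift by the max so the largest exponent is exp(0) = 1,
    # avoiding overflow for large but finite scores.
    return m + math.log(sum(math.exp(x - m) for x in xs))

# Finite emission scores: the partition term stays finite.
clean = log_sum_exp([2.0, -1.0, 0.5])
print(math.isfinite(clean))   # True

# One NaN emission poisons the whole recurrence, so the
# negative log-likelihood loss comes out as NaN too.
poisoned = log_sum_exp([2.0, float("nan"), 0.5])
print(math.isnan(poisoned))   # True
```

So before suspecting the CRF's own parameters, it may be worth checking the emissions tensor fed into the CRF for NaN/Inf values (exploding gradients in the LSTM are a common cause).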