Closed tautomer closed 1 year ago
Hiya.
1) let me know when tests are done and we can merge in.
2) I do not object to putting test scripts in the repo. At some hopefully-not-too-far date we can use an automated testing tool. For now I would suggest we put them in a new directory /tests/
Thanks!
> let me know when tests are done and we can merge in.
I will do more tests during the holiday.
> 2. I do not object to putting test scripts in the repo. At some hopefully-not-too-far date we can use an automated testing tool.
I have a branch in my fork called unittest. Still very basic. For testing reloading, etc., we have to include some files.
Done testing.
I think both changes are good.
Both should be working fine, but I think I should do more tests at this moment, hence the "WIP" tag.
Restarting
It turns out that there are some bugs in the new implementation of restarting.
Model reloading suffers from 1) as well.
After fixing restart (again), the logic will be like this:
The old code treats the first case like the 4th one, which throws an error. In scenario 2, the model_device variable is unset, so the device is now determined from one tensor in the checkpoint. Scenarios 2 and 3 fall into the same if branch, so they get the same treatment. We can probably add one more if so that model_device is only checked again in scenario 2.
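A minimal sketch of that extra branch, under my own assumptions (all names here are hypothetical, and devices are plain strings rather than real torch devices):

```python
def resolve_model_device(model_device, checkpoint):
    """Hypothetical sketch: pick the device to load the model on.

    `model_device` is None when the user did not request a device
    (scenario 2); only in that case do we infer it from the checkpoint.
    """
    if model_device is None:
        # Scenario 2: infer the device from one tensor in the checkpoint.
        # Here each state_dict entry is a dict carrying a "device" key.
        first_entry = next(iter(checkpoint["state_dict"].values()))
        model_device = first_entry["device"]
    # Scenarios 1, 3, 4: the user-specified device is kept unchanged.
    return model_device
```

The point of the extra `if model_device is None` check is that scenario 3 never re-derives the device from the checkpoint; only scenario 2 does.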
Time to create the unit tests? When testing manually, I forgot to include scenario 1.
Dipole
The multi-target implementation looks fine. Training only on dipole, I get different histograms when comparing state by state between 5 single-target nodes and one 5-target node, but the histograms do look similar. I believe it's working; let me collect more evidence to be sure.
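A rough illustration of the kind of state-by-state histogram comparison described above (binning scheme, ranges, and names are made up for the sketch):

```python
def histogram(values, bins=10, lo=-1.0, hi=1.0):
    """Bin `values` into `bins` equal-width buckets over [lo, hi)."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        i = int((v - lo) / width)
        counts[min(max(i, 0), bins - 1)] += 1
    return counts

def compare_per_state(preds_a, preds_b, n_states):
    """Histogram each state's predictions from two models side by side.

    preds_a / preds_b map state index -> list of predicted values,
    e.g. from a 5-target node vs. five single-target nodes.
    """
    return [(histogram(preds_a[s]), histogram(preds_b[s]))
            for s in range(n_states)]
```

Histograms that line up bucket-for-bucket across all states would be the "more evidence" mentioned above; small per-bucket differences are still expected since the two trainings are not identical.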