John: Thanks for the comments. We will keep this in mind. Let me know if you would like to play with ANI in a Docker container. We have a prototype version.
On Fri, Jul 6, 2018 at 21:35 John Chodera notifications@github.com wrote:
Just curious why you're distributing binary libraries instead of a platform-portable version here. Is the actual code that could be targeted to multiple systems available in another repo?
A Docker container at least enables reproducibility and portability, though not understanding or innovation. I think a Docker container would still be useful for us to play around with! Do you have one on Docker Hub?
Could these custom models be transferred over to standard TF/Keras/PyTorch backends? That would make transferability and accessibility an order of magnitude higher.
Looks like someone has already done this!
Cool, though the resulting force field would be slightly different. Close enough for trial runs, however.
Hi guys. We have a beta version in PyTorch and will release it soon. The khan version is not quite ANI, and neither is the DeepChem one. The basics are there, but the devil is in the details of activation functions, epochs, gradients, etc.
So the paper is useless without releasing all the code, since it's an incomplete description of what you actually did? :)
Sounds like a great argument for releasing the code!
It is actually the COMPLETE opposite. In the case of DeepChem, for instance, they implemented whatever the hell they felt like, even when the paper was VERY clear about our choices. They published a paper showing that ANI was bad, and it turned out to have been their own mistakes. That is what I meant. Take khan or DeepChem, read our paper carefully, and implement OUR choices there. Then the comparison is fair.
Got it! Thanks for the clarification!
@jchodera the paper has a complete and honest description of what we did. Not everyone pays attention to the full technical details in the SI, or they simply make their own informed decisions on those parameters.
Like, wtf, these guys are crazy to use a Gaussian activation function, let me use the latest greatest ReLU, etc.
Awesome. Looking forward to the PyTorch implementation then!
We clarified the use of the gaussian activation function with you guys here:
https://github.com/isayev/ASE_ANI/issues/9
In addition to the arxiv paper:
https://arxiv.org/pdf/1610.08935.pdf
Which specifically mentions:
"All hidden layer nodes use a Gaussian activation function[47] while the output node uses a linear activation function."
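For anyone comparing implementations, the difference being discussed can be sketched in a few lines of plain Python. This is just an illustrative sketch of what "Gaussian hidden activation, linear output activation" could mean; the exact functional form and any scaling constants used in ANI are assumptions here, taken from the quoted sentence rather than from the released code:

```python
import math

def gaussian(x):
    # Hypothetical Gaussian activation sketch: exp(-x^2).
    # Equals 1 at x == 0 and decays smoothly toward 0 for large |x|,
    # unlike ReLU, which is zero for x < 0 and unbounded for x > 0.
    return math.exp(-x * x)

def linear(x):
    # Identity activation, as described for the output node.
    return x

def relu(x):
    # Shown only for contrast with the Gaussian choice above.
    return max(0.0, x)
```

The point of the contrast: swapping `gaussian` for `relu` (as some reimplementations apparently did) changes the shape of the learned function, so a comparison using a different activation is not a comparison against ANI as published.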
PS - we're also looking forward to the pytorch version!
@jchodera @proteneer @dgasmith Pytorch ANI is now available: https://github.com/aiqm/torchani