jdakka opened this issue 6 years ago
What is the requirement we are not satisfying? Having an MPI version of NAMD on Titan?
@dww100 could you verify if this is what you meant by your original request?
@jdakka can you send me what I originally said?
I can imagine several things this might refer to, but I can't remember which one this was specifically about.
@dww100 the requirement from your email is as stated in the title: a specific NAMD compilation on Titan.
You'll have to check the software link that I included for a NAMD build that is specific to your requirements. To use ORTE and achieve the 3750-node requirement, it is best to use a non-MPI version of NAMD. I checked that link, and there is a multicore-CUDA non-MPI version available. However, in order to run TIES you need a non-CUDA, non-MPI multicore binary, which is also available at the link.
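For concreteness, here is a minimal sketch of launching one TIES replica with the multicore (non-CUDA, non-MPI) binary on a Titan node; the binary path and input file name are hypothetical placeholders, and the aprun/Charm++ flags assume Titan's 16-core nodes:

```python
import subprocess

# Hypothetical paths/names; substitute the actual multicore (non-CUDA,
# non-MPI) namd2 binary and the TIES input file for your setup.
NAMD2 = "/path/to/NAMD_multicore/namd2"
CONF = "ties_replica_0.conf"

# aprun places a single namd2 process on one Titan node; Charm++ SMP
# threads (+p16) use the node's 16 cores. No MPI ranks are involved.
cmd = ["aprun", "-n", "1", "-d", "16", NAMD2, "+p16", "+setcpuaffinity", CONF]
subprocess.run(cmd, check=True)
```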
In our past runs with you, we generally mapped a single replica to a NAMD executable. I understand the frustration this causes when trying to reach 3750 nodes within the desired wall time, but I see this as a new requirement that would require some digging to work out which NAMD binary would allow multiple replicas to run within a single NAMD executable.
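To make the current mapping concrete, here is a rough sketch (not our actual launcher) of one replica per NAMD executable; paths and file names are hypothetical. Since each replica gets its own namd2 process on its own node, the node count scales linearly with the number of replicas:

```python
import subprocess

NAMD2 = "/path/to/NAMD_multicore/namd2"   # hypothetical binary path
N_REPLICAS = 4                            # production runs would need ~3750

# Current mapping: one independent namd2 process per replica, so the
# number of nodes grows linearly with the number of replicas.
procs = []
for i in range(N_REPLICAS):
    conf = f"ties_replica_{i}.conf"       # hypothetical per-replica input
    cmd = ["aprun", "-n", "1", "-d", "16", NAMD2, "+p16", conf]
    procs.append(subprocess.Popen(cmd))

# Wait for every replica to finish before post-processing.
for p in procs:
    p.wait()
```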
To use ORTE on Titan, it is best to use a non-MPI-compiled NAMD binary. This link has NAMD software available; the multicore NAMD version is non-MPI.
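As a quick sanity check that a given namd2 binary is not MPI-linked, something like the following could work (a sketch; it assumes the binary is dynamically linked so that ldd can list its dependencies, which may not hold for static Cray builds):

```python
import subprocess

NAMD2 = "/path/to/NAMD_multicore/namd2"   # hypothetical binary path

# An MPI-linked binary will typically list an MPI shared library
# (e.g. libmpich, libmpi) among its dependencies.
deps = subprocess.run(["ldd", NAMD2], capture_output=True, text=True).stdout
if any("mpi" in line.lower() for line in deps.splitlines()):
    print("MPI libraries found; this build is not suitable for the ORTE path.")
else:
    print("No MPI libraries found; this looks like a multicore (non-MPI) build.")
```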