mlsample / hairygami_umbrella_sampling


Some things which do not work #1

Open francescomambretti opened 4 weeks ago

francescomambretti commented 4 weeks ago

Hi, here is a list of minor and major issues with the code. Minor issues:

Major issues:

mlsample commented 3 weeks ago

Hi Francesco, thank you for your interest in the project and for helping to find the current bugs in the code! I will write up a more comprehensive response to your much appreciated bug reports, with solutions for each of them. Additionally, I will post a docker container environment so the code can be tested in a consistent setting.

mlsample commented 3 days ago

I added a docker image so my code is reproducible: https://hub.docker.com/repository/docker/mlsample/hairygamiumbrellasampling/tags

    docker pull mlsample/hairygamiumbrellasampling:latest
    docker run --rm --gpus all -it -p 8888:8888/tcp mlsample/hairygamiumbrellasampling:latest

I'll address the major issues first:

  1. The rest of the notebook code was added to the Python script. I ran the code within a container, but hopefully it works on your workstation as well.
  2. I am unsure why join=True is not working for you. This is not really a fix to the underlying problem you are having, but you can use simulation_manager.worker_manager() to run the simulations, which is guaranteed to block (see the sketch after this list).
  3. Not too sure about this one; can you provide more details?
  4. Assuming you are using the code from the previous version, I can see how the melting temperature could potentially change with the box size, but the size of the shift also depends on the xmax parameter you use and on how much the box size changes. The melting temperature is, of course, a function of the DNA concentration, but you would only see meaningful changes (more than 1 or 2 degrees) if the box size changed by many orders of magnitude. If you change the box size without changing xmax, then, without any post-processing analysis, there should be no difference (a quick concentration-scaling illustration follows this list).
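
Regarding point 2, here is a minimal sketch of the blocking pattern. Apart from worker_manager() and join=True, which are mentioned above, the import path, the Simulation constructor arguments, and the build/queue_sim calls are assumptions about the ipy_oxdna API and may differ in your installed version; the point is only that worker_manager() returns once the queued simulations have finished.

    # Sketch only: names other than worker_manager() and join=True are
    # assumptions about the ipy_oxdna API and may differ in your version.
    from ipy_oxdna.oxdna_simulation import Simulation, SimulationManager  # assumed import path

    # hypothetical layout: one directory with the starting files, one output dir per run
    sims = [Simulation('./starting_files', f'./run_{i}') for i in range(3)]

    simulation_manager = SimulationManager()
    for sim in sims:
        sim.build()                        # assumed per-simulation build step
        simulation_manager.queue_sim(sim)  # assumed queueing call

    # Instead of simulation_manager.run(join=True), worker_manager() runs the
    # queued simulations and only returns when they have all completed.
    simulation_manager.worker_manager()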
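
As a purely illustrative back-of-the-envelope check for point 4 (not code from this repo): the effective strand concentration scales with the inverse of the box volume, so the box edge has to change by a large factor before the concentration moves by orders of magnitude.

    # Illustrative arithmetic only: concentration ~ 1 / box volume, so one
    # order of magnitude in concentration needs roughly a 10**(1/3) ~ 2.15x
    # change in box edge, and several orders of magnitude need far more.
    box_edges = [20.0, 40.0, 200.0]  # arbitrary edge lengths in simulation units

    reference_volume = box_edges[0] ** 3
    for edge in box_edges:
        dilution = (edge ** 3) / reference_volume
        print(f"edge {edge:6.1f} -> volume ratio {dilution:9.1f}x, "
              f"concentration ratio {1.0 / dilution:.5f}x")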

Small issues:

  1. I should lint the code, but for now I'll keep it as is.

  2. I should comment the code more, but for now, I will release a website with documentation of the important features.
  3. Nvidia MPS, in the context of a job scheduler (Slurm), should be set up in the batch script. Outside of a job scheduler, you can run the nvidia-mps daemon locally in a terminal with the CUDA toolkit installed. The names of the pipe and log directories don't need to include any keywords, but they should be unique to each daemon instance (see the sketch after this list).
  4. The oxDNA executable used by my code depends entirely on oxpy, and as such on which Python virtual environment you are in when you run cmake while first building oxDNA. The executable that oxpy is linked against is the one built in the make step (see the environment check after this list).
  5. A dedicated conda env is a good idea; however, related to the issue above, you would need to build oxDNA while that env is active, which makes such a task somewhat less relevant, since ipy_oxdna depends on oxpy. I think a docker container fills this need better, but if you know of another solution I would be interested.
  6. The default options can be found in defaults.py.
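
For point 3, this is a small sketch of starting one MPS daemon with its own pipe and log directories. nvidia-cuda-mps-control and the CUDA_MPS_PIPE_DIRECTORY / CUDA_MPS_LOG_DIRECTORY variables are standard parts of the CUDA toolkit; the directory layout here is arbitrary and only needs to be unique per daemon instance.

    # Sketch: launch an NVIDIA MPS control daemon with unique pipe/log dirs.
    # The binary and environment variables come with the CUDA toolkit; the
    # directory names are arbitrary but must be unique per daemon instance.
    import os
    import subprocess
    import tempfile

    mps_root = tempfile.mkdtemp(prefix="mps_instance_")
    pipe_dir = os.path.join(mps_root, "pipe")
    log_dir = os.path.join(mps_root, "log")
    os.makedirs(pipe_dir, exist_ok=True)
    os.makedirs(log_dir, exist_ok=True)

    env = os.environ.copy()
    env["CUDA_MPS_PIPE_DIRECTORY"] = pipe_dir
    env["CUDA_MPS_LOG_DIRECTORY"] = log_dir

    # -d daemonizes the control process; CUDA jobs started with the same
    # CUDA_MPS_PIPE_DIRECTORY are then routed through this MPS instance.
    subprocess.run(["nvidia-cuda-mps-control", "-d"], env=env, check=True)

In a Slurm job, the equivalent exports and the daemon launch would simply go at the top of the batch script, before the oxDNA runs.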
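
Related to points 4 and 5, a quick way to confirm which interpreter, and therefore which oxpy build, will be picked up (generic introspection, not project code):

    # Print which Python interpreter is active and where oxpy was imported
    # from; oxDNA must have been built (cmake + make) while this environment
    # was active for the linked executable to match.
    import sys
    import oxpy

    print("Python interpreter:", sys.executable)
    print("oxpy location:     ", oxpy.__file__)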