radical-cybertools / ExTASY

Failure of Amber/CoCo on Stampede when numIterations = 9 #146

Closed ashkurti closed 9 years ago

ashkurti commented 9 years ago

extasy log at: https://gist.github.com/ashkurti/69a32188159e0f4d81ae

According to the current requirements, if numIterations = 9 we would expect 4 folders (iter2, iter4, iter6, iter8) in the backup folder, each containing the corresponding trajectory and CoCo log files.

In this case I see only the iter2, iter4 and iter6 folders, but not the iter8 folder. The run took no more than 20 minutes, so the walltime should not be the problem this time.
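
For reference, this is the backup layout I expect (a minimal sketch only; the loop bound and the folder/file names are just illustrative of the description above):

# minimal sketch of the expected backup folders for numIterations = 9;
# every even iteration below numIterations should have a backup folder
numIterations=9
for (( i = 2; i < numIterations; i += 2 )); do
    echo "backup/iter${i}/ with that iteration's trajectory and CoCo log files"
done
# prints iter2, iter4, iter6 and iter8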

ashkurti commented 9 years ago

Investigating the files created on Stampede, at /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dcacf8cdba3f96beef01:

$ ls
logfile  mdshort.in  min6.out   radical_pilot_cu_launch_script-h3QEDO.sh  STDOUT
md6.crd  min6.crd    min.in     radical_pilot_cu_launch_script-KCunzS.sh
md6.out  min6.inf    penta.top  STDERR
$ more STDERR
ln: creating hard link `./STDERR': File exists
ln: creating hard link `./STDOUT': File exists
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[c403-002.stampede.tacc.utexas.edu:mpispawn_0][readline] Unexpected End-Of-File on file descriptor 6. MPI process died?
[c403-002.stampede.tacc.utexas.edu:mpispawn_0][mtpmi_processops] Error while reading PMI socket. MPI process died?
[c403-002.stampede.tacc.utexas.edu:mpispawn_0][child_handler] MPI process (rank: 0, pid: 86946) exited with status 1
ln: accessing `md6.ncdf': No such file or directory
[c403-002.stampede.tacc.utexas.edu:mpispawn_0][report_error] connect() failed:  (111)
$ more STDOUT 
TACC: Starting up job 4865579
TACC: Setting up parallel environment for MVAPICH2+mpispawn.
TACC: Starting parallel tasks...
TACC: MPI job exited with code: 1

TACC: Shutdown complete. Exiting.
$ more logfile 

Major Routine Parallel Profiling - NonSetup CPU Seconds:

           D                                                              
           a                                                              
           t                                                              
           a                              D                               
           D         N                    i                               
           i         o                    h                               
           s         n             A      e      S      R      O         T
   T       t         B      B      n      d      h      u      t         o
   a       r         o      o      g      r      a      n      h         t
   s       i         n      n      l      a      k      M      e         a
   k       b         d      d      e      l      e      D      r         l
------------------------------------------------------------------------------
   0     0.0       0.0    0.0    0.0    0.0    0.0    0.0    0.0       0.0
   1     0.0       0.0    0.0    0.0    0.0    0.0    0.0    0.0       0.0
------------------------------------------------------------------------------
 avg     0.0       0.0    0.0    0.0    0.0    0.0    0.0    0.0       0.0
 min     0.0       0.0    0.0    0.0    0.0    0.0    0.0    0.0       0.0
 max     0.0       0.0    0.0    0.0    0.0    0.0    0.0    0.0       0.0
 std     0.0       0.0    0.0    0.0    0.0    0.0    0.0    0.0       0.0
------------------------------------------------------------------------------

GB NonBond Parallel Profiling - NonSetup CPU Seconds:

                                 O         D          
                                 f         i          
             R                   f         s         T
   T         a         D         D         t         o
   a         d         i         i         r         t
   s         i         a         a         i         a
   k         i         g         g         b         l
------------------------------------------------------
   0       0.0       0.0       0.0       0.0       0.0
   1       0.0       0.0       0.0       0.0       0.0
------------------------------------------------------
 avg       0.0       0.0       0.0       0.0       0.0
$ more radical_pilot_cu_launch_script-h3QEDO.sh
#!/bin/bash -l
cd /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dcacf8cdba3f96beef01
ln /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc9ff8cdba3f96beeef1/* .
module load TACC && module load amber
ln /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1da00f8cdba3f96beee24/mdshort.in .

/usr/local/bin/ibrun -n 2 -o 8 pmemd.MPI "-O" "-i" "mdshort.in" "-o" "md6.out" "-inf" "md6.inf" "-x" "md6.ncdf" "-r" "md6.rst" "-p" "penta.top" "-c" "md6.crd"
ln md6.ncdf /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc48f8cdba3f96beeeef/md_6_0.ncdf
$ more radical_pilot_cu_launch_script-KCunzS.sh
#!/bin/bash -l
cd /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc9ff8cdba3f96beeef1
module load TACC && module load amber
ln /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc48f8cdba3f96beeeef/min60.crd min6.crd
ln /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1da00f8cdba3f96beee24/penta.top .
ln /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1da00f8cdba3f96beee24/min.in .

/usr/local/bin/ibrun -n 2 -o 6 pmemd.MPI "-O" "-i" "min.in" "-o" "min6.out" "-inf" "min6.inf" "-r" "md6.crd" "-p" "penta.top" "-c" "min6.crd" "-ref" "min6.crd"
$ ls -l /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc48f8cdba3f96beeeef/
total 1800
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_0.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_10.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_11.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_12.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_13.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_14.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_15.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_1.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_2.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_3.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_4.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_5.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:03 md_5_6.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_7.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_8.ncdf
-rw------- 1 ardi G-801782 71680 Feb 16 06:02 md_5_9.ncdf
-rw------- 2 ardi G-801782 71680 Feb 16 06:04 md_6_13.ncdf
-rw------- 2 ardi G-801782 71680 Feb 16 06:04 md_6_14.ncdf
-rw------- 2 ardi G-801782 71680 Feb 16 06:04 md_6_15.ncdf
-rw------- 2 ardi G-801782 71680 Feb 16 06:04 md_6_1.ncdf
-rw------- 2 ardi G-801782 71680 Feb 16 06:04 md_6_2.ncdf
-rw------- 2 ardi G-801782 71680 Feb 16 06:04 md_6_5.ncdf
-rw------- 2 ardi G-801782 71680 Feb 16 06:04 md_6_6.ncdf
-rw------- 2 ardi G-801782 71680 Feb 16 06:04 md_6_7.ncdf
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min60.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min610.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min611.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min612.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min613.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min614.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min615.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min61.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min62.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min63.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min64.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min65.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min66.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min67.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min68.crd
-rw------- 4 ardi G-801782  2174 Feb 16 06:03 min69.crd
-rwx------ 1 ardi G-801782   135 Feb 16 06:03 radical_pilot_cu_launch_script-_MaWUV.sh
-rw------- 1 ardi G-801782     0 Feb 16 06:03 STDERR
-rw------- 1 ardi G-801782     2 Feb 16 06:03 STDOUT
$ more  /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc48f8cdba3f96beeeef/STDERR
$ more /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc48f8cdba3f96beeeef/STDOUT
2
$ more /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc48f8cdba3f96beeeef/radical_pilot_cu_launch_script-_MaWUV.sh
#!/bin/bash -l
cd /work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc48f8cdba3f96beeeef

/bin/echo "2"

ashkurti commented 9 years ago

So, based on what is and is not generated on Stampede, we can see that of the md_6_0 to md_6_15 files only 8 out of 16 were generated. This probably happened because of a failed MD simulation, although the ExTASY workflow raised no error or warning for it. We should probably find a way to catch such MD failures and missing output files when they occur ...
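
For example, even a quick check along these lines (the path and the expected file count are taken from this particular run) would already reveal the gap:

# hypothetical check for missing trajectory files after the MD stage of iteration 6
dir=/work/02998/ardi/radical.pilot.sandbox/pilot-54e1d9f8f8cdba3f96beee22/unit-54e1dc48f8cdba3f96beeeef
for i in $(seq 0 15); do
    [ -f "$dir/md_6_${i}.ncdf" ] || echo "missing: md_6_${i}.ncdf" >&2
done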

I will investigate this scenario again to see whether this is a random error or whether the failure point is always the same :)

ashkurti commented 9 years ago

In addition, I have copied the corresponding radical.pilot folder to /work/02998/ardi/pilot-54e1d9f8f8cdba3f96beee22 on Stampede and made it publicly accessible, in case anyone wants to browse through the files.

vivek-bala commented 9 years ago

In md6.out in unit-54e1dcacf8cdba3f96beef01, I notice the error,

| ERROR:   I could not understand line     3
************************************************************************

*s in the inpcrd file often indicate an overflow of the Fortran format used
to store coordinates in the inpcrd/restart files. This often happens when
particles diffuse very far away from each other. Make sure you are removing
center-of-mass translation (nscm /= 0) or check if you have multiple, mobile
molecules that have diffused very far away from each other. This condition is
highly unusual for non-periodic simulations.

I believe this is similar to the error we got in #137.

ashkurti commented 9 years ago

@vivek-bala you are right, it is similar - in fact the same as #137 - but Iain insisted on raising a new issue for this to avoid confusion, so that we could reason about the errors afresh.

vivek-bala commented 9 years ago

Ah ok, in that case should I close #137 ?

ashkurti commented 9 years ago

So, in the same folder, we have noticed that some CUs contain only one script (related to the minimization), while other CUs contain two scripts, one related to the minimization and the other to the MD simulation. For example:

unit-54e1dca0f8cdba3f96beef00/:
total 80
-rw-------   2 ardi G-801782  2347 Feb 16 06:04 logfile
-rw-------   2 ardi G-801782  2241 Feb 16 06:03 md6.crd
-rw-------   4 ardi G-801782  2174 Feb 16 06:03 min6.crd
-rw-------   2 ardi G-801782   384 Feb 16 06:03 min6.inf
-rw-------   2 ardi G-801782 13318 Feb 16 06:03 min6.out
-rw------- 225 ardi G-801782   225 Feb 16 05:52 min.in
-rw------- 231 ardi G-801782 33603 Feb 16 05:52 penta.top
-rwx------   2 ardi G-801782   669 Feb 16 06:03 radical_pilot_cu_launch_script-QMSbBw.sh
-rw-------   1 ardi G-801782     0 Feb 16 06:03 STDERR
-rw-------   1 ardi G-801782   160 Feb 16 06:03 STDOUT

unit-54e1dcacf8cdba3f96beef01/:
total 92
-rw-------   2 ardi G-801782  2347 Feb 16 06:03 logfile
-rw-------   2 ardi G-801782  2241 Feb 16 06:03 md6.crd
-rw-------   1 ardi G-801782  2771 Feb 16 06:04 md6.out
-rw------- 113 ardi G-801782   216 Feb 16 05:52 mdshort.in
-rw-------   4 ardi G-801782  2174 Feb 16 06:03 min6.crd
-rw-------   2 ardi G-801782   384 Feb 16 06:03 min6.inf
-rw-------   2 ardi G-801782 11090 Feb 16 06:03 min6.out
-rw------- 225 ardi G-801782   225 Feb 16 05:52 min.in
-rw------- 231 ardi G-801782 33603 Feb 16 05:52 penta.top
-rwx------   1 ardi G-801782   665 Feb 16 06:04 radical_pilot_cu_launch_script-h3QEDO.sh
-rwx------   2 ardi G-801782   668 Feb 16 06:03 radical_pilot_cu_launch_script-KCunzS.sh
-rw-------   1 ardi G-801782   667 Feb 16 06:04 STDERR
-rw-------   1 ardi G-801782   194 Feb 16 06:04 STDOUT

marksantcroos commented 9 years ago

So, in the same folder, we have noticed that some CUs contain only one script (related to the minimization), while other CUs contain two scripts, one related to the minimization and the other to the MD simulation.

Hmmmm, thanks for bringing this up. @RADICAL group: Are we doing something funny in EnsembleMD or is this a genuine problem with RP?

vivek-bala commented 9 years ago

This is because of an ln cu_1/* cu_2/ statement that creates links for the (possibly very large number of) intermediate files between the two simulation kernels. I could make this more specific, but I believe all files except min.in are required by the second kernel, hence the * rather than specific filenames.
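
For what it's worth, a more selective variant could look roughly like this (only a sketch; the cu_1/cu_2 paths are placeholders and the min.in exclusion is the one mentioned above):

# sketch: link everything from the minimization unit except min.in, and skip
# names that already exist in the target (avoids the "File exists" errors in STDERR)
src=/path/to/cu_1   # placeholder for the first (minimization) unit sandbox
dst=/path/to/cu_2   # placeholder for the second (MD) unit sandbox
for f in "$src"/*; do
    name=$(basename "$f")
    [ "$name" = "min.in" ] && continue
    [ -e "$dst/$name" ] || ln "$f" "$dst/$name"
done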

marksantcroos commented 9 years ago

I was assuming/hoping this was resolved by now. I really recommend that you guys start using RP proper asap.

vivek-bala commented 9 years ago

I gave it a run and now it fails in the 3rd iteration for me. Looking into the unit folders:

The compute unit that fails executes:

/usr/local/bin/ibrun -n 2 -o 14 pmemd.MPI "-O" "-i" "mdshort.in" "-o" "md3.out" "-inf" "md3.inf" "-x" "md3.ncdf" "-r" "md3.rst" "-p" "penta.top" "-c" "md3.crd"

md3.crd looks like: https://gist.github.com/vivek-bala/cb81cbb43962bd211bbe
Error seen in md3.out: https://gist.github.com/vivek-bala/af879e004bb4ad2f6684

Tracing back the generation of this md3.crd file, the CU/command that produces it is:

/usr/local/bin/ibrun -n 2 -o 2 pmemd.MPI "-O" "-i" "min.in" "-o" "min3.out" "-inf" "min3.inf" "-r" "md3.crd" "-p" "penta.top" "-c" "min3.crd" "-ref" "min3.crd"

min.in: https://gist.github.com/vivek-bala/c17012d03aae52b09816
min3.out: https://gist.github.com/vivek-bala/4541a0eb2e4eaea5bb3f
min3.inf: https://gist.github.com/vivek-bala/0501f87f93115abde324
md3.crd (the output): https://gist.github.com/vivek-bala/cb81cbb43962bd211bbe
min3.crd: https://gist.github.com/vivek-bala/0cebe3d58890bf123a69

Does anything look wrong with the command that generates md3.crd ?

vivek-bala commented 9 years ago

@marksantcroos, I have been trying it (rather slowly), but with lots of files I was running into errors. Will bring this up asap.

ashkurti commented 9 years ago

@vivek-bala - Thanks for investigating this. Similarly to what you observe, while looking through the files of our failed CUs we have also noticed that the energy values immediately before the failure look enormous. Such energy values would certainly lead to the failure of a subsequent MD simulation.

Ideally, in our ExTASY workflow we would want to catch the error of a failed MD simulation. This could be done either by inspecting the generated files - e.g. the generated md3.crd file should contain clearly readable atom coordinates - or, even better, by checking the pmemd.MPI return code. If, in addition, the path of the failed unit could be written to STDERR, that would be optimal.
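
Something along these lines is what I have in mind for the minimization step that produces md3.crd (a rough sketch only; the overflow check and the reporting format are just illustrative assumptions):

# rough sketch: run the minimization, then flag the unit if pmemd.MPI exits
# non-zero or if the restart it wrote contains the '****' overflow markers
# that break the subsequent MD step
ibrun -n 2 -o 2 pmemd.MPI -O -i min.in -o min3.out -inf min3.inf \
      -r md3.crd -p penta.top -c min3.crd -ref min3.crd
rc=$?
if [ $rc -ne 0 ] || grep -qF '****' md3.crd 2>/dev/null; then
    echo "failed unit: $PWD (pmemd.MPI exit code $rc)" >&2
fi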

However, even though we would like to have this error catching, it does not appear to be the main problem in such cases. Charlie has been thinking about a possible solution that we might want to integrate directly into CoCo. This would involve carrying out the MD simulations from the CoCo-generated structures in a slightly different way from what we currently do, so that the resulting MD simulations have more stable energy values and are less likely to fail. This might also mean slightly changing some of the pmemd commands, and we would then need your collaboration to adopt these in the ExTASY workflow.

radical commented 9 years ago

Mark, you seem to have mentioned me by mistake.

marksantcroos commented 9 years ago

@radical Yes, sorry about that. RADICAL is our (non-github) group name.

vivek-bala commented 9 years ago

Ideally, in our ExTASY workflow we would want to catch the error of a failed MD simulation. This could be done either by inspecting the generated files - e.g. the generated md3.crd file should contain clearly readable atom coordinates - or, even better, by checking the pmemd.MPI return code. If, in addition, the path of the failed unit could be written to STDERR, that would be optimal.

I think saving the path of a failed unit should be possible, but I'm not entirely sure about error analysis within a CU. So does that mean we save the path of the particular failing CU and continue with the experiment? Do we continue with (N total CUs - 1 failed CU) compute units for the rest of the experiment?

If I understood correctly, this issue is to be solved in a future CoCo version, but for now we should aim for a temporary fix in ExTASY (?).

ashkurti commented 9 years ago

I think saving the path of a failed unit should be possible, but I'm not entirely sure about error analysis within a CU. So does that mean we save the path of the particular failing CU and continue with the experiment? Do we continue with (N total CUs - 1 failed CU) compute units for the rest of the experiment?

This would be excellent error handling, with many benefits. The main benefit is that the workflow continues without interruption up to the point the user selected - e.g. the number of iterations. It means that in the analysis step we simply do not consider the trajectory files related to the failing CU, which were generated erroneously or not at all. In the following iterations nothing prevents us from generating as many frontier points as required and continuing with the usual number of CUs. And, if we choose to investigate the failing units and the related MD files, we would know straight away where to look.
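
Concretely, something like this is what I imagine happening before the analysis step (a sketch only; the iteration number, the CU count and the log file name are just illustrative):

# sketch: collect only the trajectories that were actually produced in this
# iteration, record the missing ones, and pass the survivors to the analysis
iter=6                      # illustrative iteration number
trajectories=()
for i in $(seq 0 15); do    # 16 CUs per iteration, as in this run
    f="md_${iter}_${i}.ncdf"
    if [ -f "$f" ]; then
        trajectories+=("$f")
    else
        echo "skipping failed CU output: $f" >> failed_units.log
    fi
done
echo "analysis step will use ${#trajectories[@]} of 16 trajectories"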

If I understood correctly, this issue is to be solved in a future CoCo version, but for now we should aim for a temporary fix in ExTASY (?).

It depends on how much effort it takes to implement the proposed ExTASY error handling, but it would be good to make the fix a permanent one: it would improve ExTASY's error catching and make ExTASY more robust! It would be fantastic if we could develop this in the ExTASY workflows for the next release, and I believe these changes will not affect the user command-line interface.

And yes, we will optimise CoCo to minimize these kinds of situations, but we will do that in a future release.

vivek-bala commented 9 years ago

Fixed in 70b68a9275796e558a64e0a29a5d052a94bfd415. New minimization file added.

ashkurti commented 9 years ago

The CoCo/Amber ExTASY workflow using the default files is not working properly, failing in the same fashion as mentioned in #144. In the related radical.pilot folder (publicly accessible at /work/02998/ardi/radical.pilot.sandbox/pilot-54ef2905f8cdba5e6481a57e) no folders related to the compute units are present.

login3.stampede(52)$ ls /work/02998/ardi/radical.pilot.sandbox/pilot-54ef2905f8cdba5e6481a57e
default_bootstrapper.sh  radical-pilot-agent.py

extasy.log at https://gist.github.com/ashkurti/216a7c38a248301429b3

vivek-bala commented 9 years ago

This is fixed in master. Please reinstall and try again.

ashkurti commented 9 years ago

Thanks @vivek-bala, I tried after installing master - or did you just introduce new fixes in master?

vivek-bala commented 9 years ago

It was a case of missing indentation while moving some code. I made a commit in the last 10 minutes which should fix that. I tried from scratch and it finished successfully (I faced the same error before the fix).

vivek-bala commented 9 years ago

Fixed in c8473eaa8a19397ab1cd2c0c232a24d601d2e642

ashkurti commented 9 years ago

@vivek-bala after clearing and reinstalling everything again, now with ExTASY version 0.1.3-beta-5-g83107c8, there are still problems with the CoCo/Amber workflow using the default files. No trajectory files are in the backup directory, apparently due to a failure to find the appropriate files.

extasy.log at https://gist.github.com/ashkurti/cbb6620579851ca1df0f and the radical.pilot folder publicly accessible at /work/02998/ardi/radical.pilot.sandbox/pilot-54f0566ef8cdba179a61a563

login3.stampede(60)$ pwd
/work/02998/ardi/radical.pilot.sandbox/pilot-54f0566ef8cdba179a61a563
login3.stampede(61)$ ls -l
total 124
-rwxr-xr-x 1 ardi G-801782  14620 Feb 27 05:35 default_bootstrapper.sh
-rwxr-xr-x 1 ardi G-801782 106287 Feb 27 05:35 radical-pilot-agent.py
drwxr-xr-x 2 ardi G-801782   4096 Feb 27 05:35 staging_area
login3.stampede(62)$ ls staging_area/
mdshort.in  min.in  penta.crd  penta.top  postexec.py

vivek-bala commented 9 years ago

The error seems to come from the sftp transfer method, but I'm not sure why it comes up: the penta.crd file has already been transferred and is available in the staging_area. Was this experiment run from the testing account?

vivek-bala commented 9 years ago

Also, does this error occur every time?

ashkurti commented 9 years ago

No ... it was run from my own account, which contains a file named ~username/.saga.cfg with the contents:

[saga.utils.pty]
ssh_share_mode = no

ashkurti commented 9 years ago

Did you do any tests from the testing account yourself?

vivek-bala commented 9 years ago

I just tried it from the testing account and I didn't encounter any errors.

I'm not sure if saga.cfg makes the difference. @marksantcroos ?

andre-merzky commented 9 years ago

The saga config option was introduced to handle older sftp versions that are not able to use shared ssh channels. So, if your system is somewhat older, you may very well need that config setting. We have encountered the problem only on some machines from Nottingham so far - I am not sure how many installations are really affected...

ashkurti commented 9 years ago

Was the fix made on the master branch? During the reinstallation I installed everything from the master branches:

pip install --upgrade radical.pilot
pip install --upgrade git+https://github.com/radical-cybertools/radical.ensemblemd.mdkernels.git@master#egg=radical.ensemblemd.mdkernels
pip install --upgrade git+https://github.com/radical-cybertools/ExTASY.git@master#egg=radical.ensemblemd.extasy

vivek-bala commented 9 years ago

I believe the new staging directives are not available in the RP release on PyPI. Please try RP from master: pip install --upgrade git+https://github.com/radical-cybertools/radical.pilot.git@master#egg=radical.pilot. I have added this to the documentation as well: http://extasy.readthedocs.org/en/latest/pages/installation.html

ashkurti commented 9 years ago

Ok, great, thanks, the CoCo/Amber workflow with the default files now works fine :+1: I am testing the other scenarios now, such as num_iterations = 9, etc.

ashkurti commented 9 years ago

The CoCo/Amber workflow with num_iterations=9 works for me now!!

vivek-bala commented 9 years ago

Great!