Currently, we use only the models (i.e. the neural network architectures, etc.) supplied with GenoCAE, stored in the Singularity container.
However, there is a need to create models on the fly.
The problem is that GCAE looks for the models relative to the location of its run_gcae.py script. To work on Bianca, however, I've put GCAE in a Singularity container, where that script lives in /opt/gcae. This means I cannot add models on the fly, as the container will (rightfully!) prevent me from doing so.
After #9, run_gcae.py can be run from any location. However, it must be accompanied by the model files it needs. Allow gcaer to run GCAE from any location by also installing/copying/creating those model files in the same relative paths, as sketched below.
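A minimal sketch of what this could look like in gcaer, assuming the GenoCAE model definitions are JSON files in a `models/` subfolder of the GCAE installation (e.g. `/opt/gcae/models` in the container) and that the target is the folder from which run_gcae.py will be called. The function name, its arguments, and the default paths are hypothetical, not part of the current gcaer API:

```r
# Hypothetical sketch: copy the GenoCAE model files into the folder from
# which GCAE will be run, preserving the 'models/' relative path that
# run_gcae.py looks for. 'gcae_models_folder' and 'gcae_run_folder' are
# assumed names; the container path is an assumption as well.
copy_gcae_model_files <- function(
  gcae_models_folder = "/opt/gcae/models",
  gcae_run_folder = getwd()
) {
  target_folder <- file.path(gcae_run_folder, "models")
  dir.create(target_folder, showWarnings = FALSE, recursive = TRUE)
  model_filenames <- list.files(
    gcae_models_folder, pattern = "\\.json$", full.names = TRUE
  )
  file.copy(from = model_filenames, to = target_folder, overwrite = TRUE)
  invisible(target_folder)
}
```

The same mechanism would also allow on-the-fly models: a user-supplied model definition could simply be written into that local `models/` folder before calling run_gcae.py, without touching the read-only container.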
Depends on: