Scientific workflow engine designed for simplicity and scalability. Trivially transition from one-off use cases to massive-scale production environments.
I'm having trouble getting call caching to work with Singularity and SGE, and I'm wondering if anyone has a working example config or some pointers. My config is below, minus passwords and specific paths/URLs, which I've replaced with a label encased in <>. I've tried switching to slower hashing strategies and finagling with the command construction, to no avail. If there's not an obvious solution, is there an easy way to debug this? There are no network issues preventing connections to Docker Hub - pulling images and converting them to .sif works fine. It's only call caching that's broken.
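For comparison, here is a minimal sketch of the call-caching pieces of a Cromwell HOCON config for a shared-filesystem backend like SGE. The key names follow Cromwell's reference configuration, but treat this as an illustrative fragment rather than a known-working setup; backend names and strategy choices are assumptions:

```hocon
# Illustrative fragment only - key names per Cromwell's reference config.
call-caching {
  enabled = true
  invalidate-bad-cache-results = true
}

backend {
  default = SGE
  providers {
    SGE {
      actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
      config {
        filesystems {
          local {
            caching {
              # "file" hashes full file contents (slow but robust);
              # "path+modtime" is fast but invalidates on any touch/copy.
              hashing-strategy = "file"
              duplication-strategy = ["hard-link", "soft-link", "copy"]
            }
          }
        }
      }
    }
  }
}
```

Note that with a shared-filesystem backend, `duplication-strategy` matters too: if Cromwell can't link or copy the cached outputs into the new call directory, a cache hit can silently fall back to re-running.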
Even when the metadata shows identical hashes for the Docker image and all inputs and outputs, the result is "Cache Miss", every time.
The call caching stanza in my metadata looks like this, for example. Am I missing something?
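For reference, here is roughly the shape of the `callCaching` stanza that the metadata endpoint returns (something like `GET /api/workflows/v1/<workflow-id>/metadata?includeKey=callCaching`); field names follow Cromwell's metadata format, but the values below are placeholders, not real hashes:

```json
"callCaching": {
  "allowResultReuse": true,
  "effectiveCallCachingMode": "ReadAndWriteCache",
  "result": "Cache Miss",
  "hashes": {
    "command template": "<md5>",
    "backend name": "<md5>",
    "input count": "<md5>",
    "output count": "<md5>",
    "runtime attribute": {
      "docker": "<md5>"
    }
  }
}
```

Comparing this stanza between the first and second run of the same call, hash by hash, is usually the quickest way to spot which component is changing between runs.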
I've never used Cromwell this way, but my understanding is that good call-caching performance depends heavily on cloud object storage, because the object store returns a precomputed checksum in short, constant time instead of Cromwell having to re-read file contents.