thorellk opened 3 years ago
It would be nice if we could decouple the environment specifications from the time specifications somehow. We would encounter the same issue in the gandalf config as well if we had such large input files, and it feels like it could quickly get messy if we need to keep multiple versions of each system profile.
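As a rough sketch of what that decoupling could look like (not necessarily how BACTpipe's configs are organized today; file and process names are placeholders), Nextflow's `includeConfig` lets each system profile pull in a shared, scheduler-agnostic resource file, so the time limits only need to be maintained in one place:

```groovy
// conf/rackham.config -- scheduler-specific settings only (placeholder file name)
process.executor = 'slurm'
process.queue    = 'core'

// shared per-process resource defaults, maintained once for all system profiles
includeConfig 'resources.config'
```

```groovy
// conf/resources.config -- placeholder values
process {
    withName: 'fastp'   { time = 20.m }
    withName: 'shovill' { time = 2.h  }
}
```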
Hello team,
I am not sure how the cluster environments work, but perhaps we could explore computing the time limit dynamically?
Something like (pseudocode)
time = 20.m * task.attempt
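For reference, a minimal sketch of how that could look as a dynamic directive combined with retries (the process name, 20-minute baseline, and retry count are illustrative, not current BACTpipe settings):

```groovy
process FASTP {
    // 20 min on the first attempt, 40 min on the second, 60 min on the third
    time { 20.m * task.attempt }

    // resubmit a failed (e.g. timed-out) task instead of stopping the run
    errorStrategy 'retry'
    maxRetries 2

    // ... input/output/script sections unchanged ...
}
```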
Yeah, that would be an alternative @abhi18av, if one can make some benchmark of how much time is needed per 100 MB of input file or something. It should "only" be fastp, shovill and Kraken that are affected by input file size...
I think it should be possible to implement it so it actually reads the size of the input file for the process and computes the time allocation based on that. That, perhaps in combination with an extension on failures, would make sense and be more effective. The alternative proposed so far would unnecessarily spend core hours for users with mainly (too) large samples: their attempts would first have to fail before the allocation is increased.
Hi team,
I came across this possible solution somewhere else and perhaps we could explore this here
time { 20.m * sample.size() }
Neat. I wasn't aware of that functionality!
I guess we would need to make some simple calculation using the sample size to come up with a good multiplier for the time, perhaps rounded to some step so we don't end up with weird allocation requests. There should also be a minimum allocation, I think :). Can you guys help me come up with something that would make sense? I'm thinking a "normal" sample would result in a time allocation of `20.m` (i.e. the multiplier would be 1), but larger samples would increase in whole integer steps depending on the size of the sample file: `20.m * 2` (medium-sized file), `20.m * 3` (large file), etc.
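As a back-of-the-envelope illustration of the whole-integer stepping (plain Groovy; the 1 GB step size is purely a placeholder until the benchmarking mentioned above says otherwise):

```groovy
// one step per started GB of input, with a floor of one step
def stepSize = 1_000_000_000                 // bytes per 20-minute step (placeholder)
def fileSizeInBytes = 2_300_000_000          // example: a 2.3 GB FASTQ file
def factor = Math.max(1, Math.ceil(fileSizeInBytes / stepSize) as int)
assert factor == 3                           // i.e. 20.m * 3 = 60 minutes for this sample
```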
Not sure if `sample.size()` would work in our context, as there is no object in the FASTP process definition called `sample`. We might have to see if it works with `path` objects instead of `file` (i.e. `reads[0].size()`), or consider rewriting the process definition slightly to use `file` instead of `path`.
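Untested, but since Nextflow evaluates dynamic directives per task (after the inputs are bound), something along these lines might work directly with the existing `path` input; the channel shape, output naming, and the 1 GB step size are assumptions:

```groovy
process FASTP {
    // one 20-minute step per started GB of the first read file (step size is a placeholder)
    time { 20.m * Math.max(1, Math.ceil(reads[0].size() / 1_000_000_000) as int) }

    input:
    tuple val(sample_id), path(reads)

    output:
    tuple val(sample_id), path("${sample_id}_trimmed_{1,2}.fastq.gz")

    script:
    """
    fastp -i ${reads[0]} -I ${reads[1]} \\
          -o ${sample_id}_trimmed_1.fastq.gz -O ${sample_id}_trimmed_2.fastq.gz
    """
}
```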
> We might have to see if it works with `path` objects instead of `file` (i.e. `reads[0].size()`)
We could rely on the `toFile` [method](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/file/Path.html#toFile()) for a java `Path` object.
Though, I think it'd be best if we encapsulate this functionality into a function or a closure in Groovy.
The final solution would look something like
time computeTime()
Would someone have time to prototype something around this?
I think that the function should look something like this, but I can't think of a way to test it on my infra.
def computeTime (inputPathObject) {
    // size of the input file in bytes
    def fileSize = inputPathObject.toFile().size()
    // one whole step per started GB; the 1 GB step size is a placeholder to tune after test runs
    def factor = Math.ceil(fileSize / 1_000_000_000) as int
    if (factor <= 1) {
        return 20.m
    }
    return 20.m * factor
}
This function might need to be adapted based on the test runs.
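For completeness, a sketch of how it could be wired into the process once the function is visible to the process definition (e.g. at the top of the same script); again untested, and the process and input names are only examples:

```groovy
// assumes computeTime() from the previous comment is defined in the same script
process FASTP {
    time { computeTime(reads[0]) }   // reads[0]: first FASTQ of the read pair

    input:
    tuple val(sample_id), path(reads)

    // ... output/script sections unchanged ...
}
```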
Hi!
Emilio presented his master's thesis today but will continue working with me over the summer :) So we'll start digging into this next week!
//Kaisa
Congrats to @emilio-r ! 🎉
I just bumped into the issue of having excessively large raw fastq files per sample, which leads to `fastp` timing out with the current allocations in the rackham config. Since this is quite an unusual situation, I don't think it is necessary to change the default one, but it would be good to have one more config for "fat" datasets. What do you think?
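A rough sketch of what that extra "fat" option could look like as a separate profile, until the dynamic allocation lands; the profile name, config path, process selector, and the 3-hour value are all placeholders:

```groovy
// nextflow.config (hypothetical addition)
profiles {
    rackham_fat {
        includeConfig 'conf/rackham.config'    // everything from the normal rackham profile
        process {
            withName: 'fastp' { time = 3.h }   // larger walltime for oversized FASTQ files
        }
    }
}
```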