-
# Bug description
I have been using SLEAP for almost a year on my institute's computing cluster to train a multi-animal topdown model and predict instances in videos. Previously, the cluster utiliz…
-
Is it possible to programmatically shutdown a kernel in JupyterLab?
I tried the solution proposed for Jupyter in [jupyter/notebook#1880](https://github.com/jupyter/notebook/issues/1880), but it doe…
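One avenue worth checking (not confirmed to resolve the JupyterLab-specific behavior in the linked issue): JupyterLab runs on Jupyter Server, whose REST API exposes `DELETE /api/kernels/<kernel_id>` to shut down a kernel. A minimal sketch that builds such a request, where `base_url`, `token`, and `kernel_id` are placeholders you must supply:

```python
# Sketch: programmatic kernel shutdown via the Jupyter Server REST API.
# base_url, token, and kernel_id are placeholders, not real values.
import urllib.request


def kernel_shutdown_request(base_url: str, token: str, kernel_id: str) -> urllib.request.Request:
    """Build the DELETE request asking the server to shut down one kernel."""
    return urllib.request.Request(
        f"{base_url}/api/kernels/{kernel_id}",
        method="DELETE",
        headers={"Authorization": f"token {token}"},
    )


# To actually send it against a running server:
# urllib.request.urlopen(kernel_shutdown_request("http://localhost:8888", "<token>", "<kernel-id>"))
```

The running kernel ids and the server token can be listed with `jupyter server list` and `GET /api/kernels`.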
-
For jobs executed on a cluster, I wonder what the differences are between `mem_mb` and `disk_mb`.
While running my pipeline on SLURM, I used something like `snakemake --cluster "sbatch --account=myaccou…
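For context, both names are standard Snakemake resources: `mem_mb` declares the RAM a job needs, while `disk_mb` declares temporary/scratch disk space. How each is consumed depends on the executor; with a `--cluster` string they are only forwarded if you reference them explicitly (e.g. `--mem={resources.mem_mb}`). A minimal rule declaring both, with illustrative values:

```
rule example:
    input: "data.txt"
    output: "result.txt"
    resources:
        mem_mb=4000,     # RAM; forward with e.g. sbatch --mem={resources.mem_mb}
        disk_mb=10000    # temporary disk space the job may use
    shell: "sort {input} > {output}"
```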
-
I was able to build the same version using spack-stack-1.4.0 on C5 following @climbfuji's instruction. I could not find spack-stack-1.4.0 on F5, but I do see 1.4.1. I tried it and got problems in compilat…
-
Any way to get this to run on multiple GPUs simultaneously?
Right now it only runs on a single GPU even when multiple are present. Any flags I might try?
-
I have mulled this over myself quite a bit and with other admins, and am throwing it out here for wider discussion. As far as I know, I'm the only person in this situation, but maybe there are others …
-
Hi All,
I would like to thank the team for the help I received on my previous question concerning duplications; it really helped a lot.
I have an unrelated question concerning Canu. I am…
-
It would be great to have icons for `.sbatch` files, which are used to identify files containing jobs for a SLURM scheduler. `.sbatch` files are not an official file format but commonly refer to these fi…
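For context, a typical `.sbatch` file is just a shell script whose `#SBATCH` comment lines carry scheduler directives (values below are illustrative):

```
#!/bin/bash
#SBATCH --job-name=demo        # name shown in the queue
#SBATCH --time=00:10:00        # wall-clock limit
#SBATCH --mem=1G               # memory per node
#SBATCH --output=demo-%j.out   # stdout file (%j expands to the job id)

echo "Running on $(hostname)"
```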
-
Hi Xihao, a very impressive job! But when I run the program up to this point, `STAARpipeline_Null_Model.r`, I get this error:
`Iteration 20 :
Error in h(simpleError(msg, call)) :
error in evaluating t…
-
# Description
I am running a C++ program that sends input tensors of size 1000 by 6 to a PyTorch model using SmartSim and retrieves output tensors of size 1000 by 1. I initialize the smartsim/smartre…