-
Hi,
I am trying to run ananse (pip virtual environment; last updated yesterday) on a 64-thread / 110 GB RAM machine. While ananse binding runs fine, ananse network progresses up to ~95% of network con…
-
-
**Description:** There are lots of lab desktops in our college, and they lie idle most of the time. The idea is to create a cluster out of them so that we can use their computing power whenev…
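At its core this is a work-queue that fans jobs out to otherwise idle machines. A minimal sketch of that pattern, using a local multiprocessing pool as a stand-in for the remote desktops (the `run_task` job here is a hypothetical placeholder; a real cluster would dispatch work over SSH or via a scheduler such as HTCondor or SLURM):

```python
# Sketch of the fan-out pattern a desktop cluster would use.
# multiprocessing.Pool stands in for remote workers; on a real cluster,
# each task would run on one of the idle lab machines instead.
from multiprocessing import Pool

def run_task(job):
    """Hypothetical unit of work; replace with the real computation."""
    n = job["n"]
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [{"n": n} for n in (10, 100, 1000)]
    with Pool(processes=3) as pool:  # one "worker" per idle desktop
        results = pool.map(run_task, jobs)
    print(results)
```

The coordinator only needs `pool.map`-style semantics: submit independent jobs, collect results in order. Fault tolerance (a desktop rebooting mid-job) is the part a real scheduler adds on top.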
-
I am curious what would be required to apply this method to the 70B-parameter version of the llama2 model?
On Reddit, I noticed you mentioned: "For training, these models barely fit in 128 80GB A100s using Dee…
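For context, the usual back-of-envelope arithmetic behind claims like that — a sketch assuming mixed-precision Adam in the DeepSpeed ZeRO regime (16 bytes of model state per parameter: fp16 weights and gradients plus fp32 master weights, momentum, and variance), and deliberately not counting activations or communication buffers, which consume much of the remaining headroom:

```python
# Back-of-envelope memory arithmetic for training a 70B-parameter model
# with mixed-precision Adam (the regime DeepSpeed ZeRO targets).
# Per parameter: 2 B fp16 weights + 2 B fp16 gradients
#              + 12 B fp32 optimizer state (master weights, momentum, variance).
params = 70e9
bytes_per_param = 2 + 2 + 12                        # 16 B of "model states" per param
model_states_gb = params * bytes_per_param / 1e9    # 1120 GB just for model states

gpus = 128
gpu_mem_gb = 80
cluster_mem_gb = gpus * gpu_mem_gb                  # 10240 GB aggregate

# With ZeRO sharding model states evenly across GPUs:
per_gpu_gb = model_states_gb / gpus                 # 8.75 GB per GPU for model states
print(model_states_gb, cluster_mem_gb, per_gpu_gb)
```

Model states alone are comfortably sharded; the "barely fit" part comes from activations, temporary buffers, and fragmentation, which scale with batch size and sequence length and are not in this sketch.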
ghost updated
10 months ago
-
1. Prepare all the material needed to investigate how to change your program. What questions are left to answer in order to change programs? What is your current course sequence, and what would the new course sequence be?…
YaleL updated
3 years ago
-
Hi,
I would be quite keen on having parallel training for learning networks.
I have seen that there may be a plan to use Dagger.jl [in this issue](https://github.com/alan-turing-institute/MLJ.j…
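A minimal sketch of the embarrassingly parallel pattern this request describes — training several independent models at once and keeping the best — written in Python rather than Julia purely for illustration; the toy `train` function and learning-rate sweep are hypothetical stand-ins for real network training, not the MLJ.jl or Dagger.jl API:

```python
# Each worker fits a toy 1-D least-squares model y = w * x with a
# different learning rate; the run with the lowest final loss wins.
from concurrent.futures import ProcessPoolExecutor

DATA = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # toy (x, y) pairs, y ≈ 2x

def train(lr, steps=200):
    """Toy gradient descent; returns (final_loss, lr)."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in DATA) / len(DATA)
        w -= lr * grad
    loss = sum((w * x - y) ** 2 for x, y in DATA) / len(DATA)
    return loss, lr

if __name__ == "__main__":
    lrs = [0.001, 0.01, 0.05]
    with ProcessPoolExecutor(max_workers=3) as ex:
        results = list(ex.map(train, lrs))
    best_loss, best_lr = min(results)
    print(best_lr, best_loss)
```

Because the trainings share nothing, a task scheduler (which is what Dagger.jl provides in Julia) can farm them out to separate processes or machines without any change to the training code itself.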
-
## Issue description
There are a slew of new AI hardware accelerators available that can beat GPUs handily, either in raw performance or in performance per watt. There are also many new ef…
-
-
Hi,
Thanks for sharing the code. We have read your paper; excellent work. We hope to use your methods to analyze our metagenomic datasets, but we have run into some challenges:
We only have 5 metagenomic samp…
yxxue updated
8 years ago
-
When I run the code with this command:
```
python main.py --config config/meta_portrait_256_pretrain_warp.yaml --fp16 --stage Warp --task Pretrain
```
I get this error:
```
start to train...…