-
This is a request to add support for multi-chassis LAG configuration and monitoring using OpenConfig. Support for LAG already exists.
-
### Discussed in https://github.com/gama-platform/gama/discussions/121
Originally posted by **chapuisk** March 12, 2024
Hi there,
Using the 1.9.3 release, I notice that the amount of "false…
-
In a multi-agent setting, when training e.g. `MAPPO_Agents()`, then calling `MAPPO_Agents.save_model(model_name='model.pth')` and finally loading the model `MAPPO_Agents.load_model(path)`, how can I e…
-
**How to customise train.sh for distributed Mamba training?**
Hello,
As I've seen in the Megatron modules, there isn't a pre-defined bash script to pre-train a Mamba model on multiple GPUs, ho…
-
I see that the multimodal models in the examples all use TensorRT directly to deploy vision encoders; why not use TensorRT-LLM? Are there known issues or challenges associated with integrating Context…
-
Training Loss, Generated Outputs.
I hope this will be a reference for model training.
https://api.wandb.ai/links/xi-speech-team/k0kdfwch
-
Is multi-machine training suitable for large models spanning multiple nodes? Secondly, can the large model be divided into blocks and allocated to each node for training? For example: Chatglm3 larg…
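The block-allocation idea in the question is essentially pipeline parallelism: assign contiguous groups of layers to each node. A toy sketch of the assignment step (illustrative only; the function name and layer counts are assumptions, not tied to ChatGLM3 or any framework):

```python
def partition_layers(num_layers: int, num_nodes: int) -> list[range]:
    """Assign contiguous blocks of layers to nodes, spreading any remainder
    over the first few nodes so block sizes differ by at most one."""
    base, extra = divmod(num_layers, num_nodes)
    assignments, start = [], 0
    for node in range(num_nodes):
        size = base + (1 if node < extra else 0)
        assignments.append(range(start, start + size))
        start += size
    return assignments

# e.g. 28 transformer layers over 4 nodes -> 7 contiguous layers per node
print(partition_layers(28, 4))
```

In a real setup a framework such as Megatron-LM or DeepSpeed handles this assignment plus the activation passing between nodes; the sketch only shows the partitioning logic.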
-
If I understand correctly, the autoregressive model has its own loss, and the multi-task dense layers that follow it have a weighted loss. How should they be combined?
And in the ranking model, how to calc…
-
I am trying to train this model on a custom dataset. With my current resources, I am unable to train the model on 1 GPU. Does this model support multi-gpu training?
-
Multiple threads receive stock ticker data and store it in a queue; a single thread takes data out of the queue for processing, but finds that the items are not strictly in the order of c…
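This is expected: a shared queue preserves each producer's own FIFO order, but the interleaving across producers is nondeterministic. One common fix is to tag every tick with a global sequence number at receive time and let the consumer reorder by that number with a small heap. A hedged sketch (the names `tag` and `consume_in_order` are illustrative, not from any particular library):

```python
import heapq
import queue
import threading

seq_lock = threading.Lock()
next_seq = 0

def tag(tick):
    """Assign a global sequence number at the moment the tick is received."""
    global next_seq
    with seq_lock:
        seq, next_seq = next_seq, next_seq + 1
    return (seq, tick)

def consume_in_order(q: queue.Queue, total: int) -> list:
    """Pop tagged ticks and emit them strictly by sequence number,
    buffering out-of-order arrivals in a min-heap."""
    heap, expected, out = [], 0, []
    while expected < total:
        heapq.heappush(heap, q.get())
        while heap and heap[0][0] == expected:
            out.append(heapq.heappop(heap)[1])
            expected += 1
    return out

# Demo: 4 producer threads push interleaved slices of pre-tagged ticks.
q = queue.Queue()
ticks = [f"tick{i}" for i in range(20)]
tagged = [tag(t) for t in ticks]  # tagging happens before the queue

def producer(items):
    for item in items:
        q.put(item)

threads = [threading.Thread(target=producer, args=(tagged[i::4],)) for i in range(4)]
for t in threads:
    t.start()
result = consume_in_order(q, len(ticks))
for t in threads:
    t.join()
print(result == ticks)  # True: order restored despite thread interleaving
```

The key design point is that the sequence number must be assigned under the same lock (or by the same receiving code path) that defines "arrival order"; once items are in the queue, it is too late to recover that order.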