-
Checklist
- [ ] I've prepended the issue tag with the type of change: [bug]
- [x] (If applicable) I've attached the script to reproduce the bug
- [ ] (If applicable) I've documented below the DLC image/doc…
-
## Description
Currently, the options for [Transformers-NeuronX Engine in LMI](https://docs.djl.ai/master/docs/serving/serving/docs/lmi/tutorials/tnx_aot_tutorial.html) don't include the possibility …
-
### Your current environment
```text
Collecting environment information...
/opt/conda/lib/python3.10/site-packages/transformers/utils/hub.py:124: FutureWarning: Using `TRANSFORMERS_CACHE` is deprec…
-
Hi, I'm trying to make a CLIP model compatible with neuron-distributed (because I'm going to continue with a multimodal model afterwards).
Currently, in my notebook on an inf2.xlarge (Ubuntu 22), I have:
…
-
Hi,
after I converted the ANN to an SNN, the converted SNN model looks like this. However, I cannot see the SNN tailor part with the snn_model.named_parameters() method. What I actually want to know is…
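A common reason for this in PyTorch (independent of the SNN library in use) is that named_parameters() only walks submodules that are registered on the module, e.g. via attributes, nn.ModuleList, or nn.ModuleDict; layers kept in plain Python lists or dicts are invisible to it. A minimal sketch, with a hypothetical Tailor module standing in for the "SNN tailor" part:

```python
import torch
import torch.nn as nn

class Tailor(nn.Module):
    # hypothetical stand-in for an "SNN tailor" layer
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(4))

class SNNModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)
        # stored in a plain list: NOT registered, so invisible to named_parameters()
        self.hidden_tailor = [Tailor()]
        # registered via nn.ModuleList: visible to named_parameters()
        self.tailor = nn.ModuleList([Tailor()])

model = SNNModel()
names = [n for n, _ in model.named_parameters()]
print(names)  # includes 'tailor.0.scale' but nothing from hidden_tailor
```

If the tailor layers show up when printing the model but not in named_parameters(), checking how they are attached to the module is a good first step.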
-
I am trying to fine-tune Llama3-70B on trn1.32xlarge using distributed training. It failed with the following error:
Container image: f"763104351884.dkr.ecr.{region}.amazonaws.com/pytorch-training-neur…
-
Hello, I ran into a problem running the NEURON mode of my model. Here are the details:
## Context
running the model (https://github.com/suny-downstate-medical-center/S1_netpyne/tree/coreneuron/sim) with n…
-
Converting the loaded model with the to_neuron() method takes a long time. Is there a way to save the neuron_model to disk and load it again? This is for GPT-NeoX.
-
I have an environment that supports both torch-neuronx and torch-xla. With simple utility calls I can confirm that the XLA device is detected and accurately identified as Neuron. However…
-
Add the Stochastic Parallelizable Spiking Neuron (SPSN) model.
Paper:
https://arxiv.org/abs/2306.12666
Torch implementation:
…
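A rough sketch of the parallel-over-time idea from the paper's abstract (the exact formulation, names, and parameters below are assumptions; see the paper for the real definition): because the stochastic neuron drops the sequential reset, the membrane potential becomes a leaky cumulative sum of the input that can be computed for all time steps at once, and spikes are then sampled independently per step.

```python
import torch

def spsn_forward(inputs, beta=0.9, threshold=1.0):
    """Rough sketch of a stochastic parallelizable spiking neuron.

    inputs: (batch, time) input currents. With no sequential reset,
    the membrane potential is a linear filter of the input, so every
    time step can be computed in parallel.
    """
    _, T = inputs.shape
    t = torch.arange(T)
    # decay[t, s] = beta**(t - s) for s <= t, else 0 (lower-triangular kernel)
    decay = torch.tril(beta ** (t.unsqueeze(1) - t.unsqueeze(0)))
    membrane = inputs @ decay.T            # all time steps at once
    spike_prob = torch.sigmoid(membrane - threshold)
    spikes = torch.bernoulli(spike_prob)   # stochastic firing
    return spikes, membrane

x = torch.rand(2, 16)
spikes, membrane = spsn_forward(x)
```

A real implementation for the library would also need the surrogate-gradient/training story and batching conventions matched to the existing neuron models, so this is only meant to illustrate why the model parallelizes over time.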