-
1. **Lab Name**: Neurophysiology Virtual Lab
2. **List of Experiments and Repositories**:
a. Voltage Clamp Technique
https://virtual-labs.github.io/exp-voltage-clamp-au
v1.0.0
b. Simple …
-
### What happened + What you expected to happen
### What Happened:
When deploying models using RayServe with autoscaling enabled on Amazon EKS, specifically across multiple `inf2` nodes, the syste…
-
Hi, I'm following the sample [here](https://github.com/aws-neuron/aws-neuron-sagemaker-samples/blob/master/inference/inf2-bert-on-sagemaker/inf2_bert_sagemaker.ipynb) to try to compile a model to Neur…
-
### System Info
```shell
TGI Image: ghcr.io/huggingface/neuronx-tgi:0.0.23
Platform:
- Platform: Linux-5.15.0-1031-aws-x86_64-with-glibc2.35
- Python version: 3.10.12
Python packages:
…
dlptv updated 1 month ago
-
Theoretically, if you're using transformers, it is possible to train on AWS Neuron instances (trn1).
With Optimum Neuron it should be possible: https://huggingface.co/docs/optimum/main/en/index, https://hug…
-
https://github.com/aws-neuron/neuronx-distributed/blob/a80091de6c9d8eb75f96a7367e143a81d586fbbc/examples/inference/llama2/neuron_modeling_llama.py#L36
The llama inference example needs to be update…
-
### Bug summary
From a pre-trained multi-head model, `dp --pt change-bias` produces a model with a much larger size. However, finetuning with `numb_steps: 0` has no problem:
```
(base) [2201110432@w…
-
This is not a bug, but rather a feature request: even when pre-compiled artifacts are available, loading a model onto Neuron cores can take a very long time.
This seems especially true when loading a…
-
How the function is imported:
from paddleslim.nas.ofa.utils import nlp_utils
The original definition of the function:
def compute_neuron_head_importance(task_name,
model,
data_lo…
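The idea behind a head-importance function like `compute_neuron_head_importance` is to score each attention head by how much the loss changes when that head is masked out (real implementations, including PaddleSlim's, typically accumulate gradients with respect to a head mask over real batches). The toy sketch below is not PaddleSlim's implementation; it illustrates the concept with a synthetic "model" whose heads have different output magnitudes, measuring importance by direct masking.

```python
# Toy illustration of head-importance scoring (NOT PaddleSlim's code):
# score each "head" by the loss increase observed when it is masked out.
import numpy as np

rng = np.random.default_rng(0)
n_heads = 4
# Heads 0 and 2 are given much larger output magnitudes, so masking
# them should change the loss the most.
scales = np.array([3.0, 0.1, 2.0, 0.1])
head_outputs = scales[:, None] * rng.normal(size=(n_heads, 8))
target = head_outputs.sum(axis=0)  # the full model fits the target exactly

def loss(mask):
    """Mean-squared error after combining head outputs under a 0/1 head mask."""
    combined = (mask[:, None] * head_outputs).sum(axis=0)
    return float(((combined - target) ** 2).mean())

base = loss(np.ones(n_heads))  # 0.0: nothing masked
importance = np.array([
    abs(loss(np.where(np.arange(n_heads) == i, 0.0, 1.0)) - base)
    for i in range(n_heads)
])
print(importance)  # heads 0 and 2 (the large-scale heads) dominate
```

In practice, gradient-based scoring (|dL/d mask| via autograd) is preferred over this mask-one-out finite differencing because it needs only one backward pass per batch rather than one forward pass per head.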
-
## Description
Unable to use the OpenAI-compatible endpoint; I get the error below.
### Error Message
PyProcess W-100-model-stdout: The following parameters are not supported by neuron with rolling batch: {'…