-
The results of running https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/PyTorch-Models/Model/llama2 are as follows:
-
### System Info
```shell
accelerate==0.22.0
auto-gptq==0.4.2+cu118
optimum==1.13.1
torch==2.0.1+cu118
torchaudio==2.0.2+cu118
torchvision==0.15.2+cu118
transformers==4.33.1
```
Running in…
-
### Source / repo
https://blog.ovhcloud.com/fine-tuning-llama-2-models-using-a-single-gpu-qlora-and-ai-notebooks/
### Model description
Llama2
### Dataset
databricks
### Literature benchmark source…
-
### 🐛 Describe the bug
Just like OpenAI or GPT4All, Llama2 apparently needs a model to work. It could also be an issue with Replicate as the provider.
1. find out if this is a replicate or llama2 …
-
While I had already pulled llama2:7b, I wanted to install llama2 (without the 7b tag). My understanding was that it is the exact same model (same hash), so perhaps ollama would install only the metadata f…
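The expectation above matches how content-addressed storage behaves in general: each layer is keyed by its digest, so two tags that point at identical content resolve to the same blob and only the tag metadata differs. A minimal sketch of that idea (illustrative only; ollama's actual manifest layout may differ):

```python
import hashlib

def blob_key(data: bytes) -> str:
    """Content-addressed key: identical bytes always hash to the same key."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Two tags referencing the same (stand-in) weights share one blob key,
# so pulling the second tag would only need new metadata, not new weights.
weights = b"...model weights..."
tags = {"llama2:7b": blob_key(weights), "llama2:latest": blob_key(weights)}
print(tags["llama2:7b"] == tags["llama2:latest"])  # True: one shared blob
```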
-
I tried to reproduce the evaluation on the ToxiGen dataset but failed (with both Llama-2-7b-hf and Llama-2-13b-hf).
shots: 6-shot
dataset: https://github.com/microsoft/SafeNLP/blob/main/data/toxiGen.json
…
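For anyone else trying to reproduce a 6-shot setup, the prompt assembly typically looks like the sketch below. This is a hypothetical illustration: the actual field names in toxiGen.json and the prompt template used by the original evaluation may differ.

```python
def build_few_shot_prompt(examples, query, n_shots=6):
    """Concatenate n_shots labeled examples, then the unlabeled query."""
    shots = examples[:n_shots]
    lines = [f"Text: {ex['text']}\nLabel: {ex['label']}" for ex in shots]
    lines.append(f"Text: {query}\nLabel:")  # model completes the last label
    return "\n\n".join(lines)

# Stand-in data with hypothetical 'text'/'label' fields.
demo = [{"text": f"example {i}", "label": "neutral"} for i in range(8)]
prompt = build_few_shot_prompt(demo, "new sentence")
print(prompt.count("Label:"))  # 7 = 6 shots + 1 query
```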
-
I am using llama2 for calibration, and the following error is reported.
main branch
commit id 6cc5e177ff2fb60b1aab3b03fa0534b5181cf0f1
![image](https://github.com/NVIDIA/TensorRT-LLM/assets/1…
-
What are the minimum hardware requirements to run the models on a local machine?
### Requirements
- CPU:
- GPU:
- RAM:
### For all models
- Llama2 7B
- Llama2 7B-chat
- Llama2 13B…
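A rough rule of thumb for the memory side of this question: weight storage is parameter count times bytes per parameter. The sketch below estimates only the weights (a hypothetical helper; real usage also needs headroom for activations and the KV cache, so treat these as lower bounds).

```python
def weight_memory_gb(n_params_billion, bytes_per_param):
    """Approximate GiB needed just to hold the model weights."""
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

# fp16 uses 2 bytes/param; 4-bit quantization roughly 0.5 bytes/param.
for name, params in [("Llama2 7B", 7), ("Llama2 13B", 13)]:
    for dtype, nbytes in [("fp16", 2), ("int4", 0.5)]:
        print(f"{name} {dtype}: ~{weight_memory_gb(params, nbytes):.1f} GiB")
```

For example, Llama2 7B in fp16 needs roughly 13 GiB for weights alone, which is why consumer GPUs usually rely on quantized variants.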
-
An error occurs during inference on `output = model.generate(**inputs, max_new_tokens=512, temperature=0.1)`, which results in
in GenerationMixin._sample(self, input_ids, logits_processor, stopping_cr…
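Without the full traceback this is hard to pin down, but two notes may help: in transformers, `temperature` only takes effect when `do_sample=True` is also passed, and the step that `GenerationMixin._sample` performs is temperature-scaled sampling over the logits. A minimal plain-Python sketch of that step (illustrative, not the library's actual code):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Divide logits by temperature, softmax, then draw one index."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# A low temperature (0.1) sharpens the distribution toward the argmax token.
print(sample_with_temperature([2.0, 1.0, 0.5], temperature=0.1))
```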
-
Fresh git clone
MacBook Pro, M2 chip
Command: `python ingest.py --device_type mps`
```
2023-10-15 14:07:26,913 - INFO - ingest.py:121 - Loading documents from /Users/******/Desktop/localGPT…
```