-
Selected Weibo content
-
Hi, I used the commands in the README to run this project.
Since you don't specify a model repo, I downloaded the model from here: https://huggingface.co/daryl149/llama-2-7b-chat-hf
The build process is…
-
### System Info
I'm on an Ubuntu server from https://console.paperspace.com/ with two A100 GPUs, but when I run the model CalderaAI/30B-Lazarus I cannot use HF Transfer, even with the "--ne…
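The flag in the report is truncated, so I won't guess at it. As a general note, though, huggingface_hub's accelerated downloader is toggled by the `HF_HUB_ENABLE_HF_TRANSFER` environment variable, which has to be set before the library reads it, and requires the optional `hf_transfer` package. A minimal sketch (the repo ID is the one from the report; whether this resolves the issue on Paperspace is not confirmed):

```python
import os

# HF_HUB_ENABLE_HF_TRANSFER must be set before huggingface_hub is imported,
# and the optional backend must be installed first: pip install hf_transfer
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

def hf_transfer_enabled() -> bool:
    """Report whether the accelerated-download switch is on for this process."""
    return os.environ.get("HF_HUB_ENABLE_HF_TRANSFER") == "1"

# With the switch on, a download such as:
#   from huggingface_hub import snapshot_download
#   snapshot_download("CalderaAI/30B-Lazarus")
# would use the hf_transfer backend if the package is available.
print(hf_transfer_enabled())
```

If the variable is exported in the shell instead, it must be in place before the Python process starts, since the library caches it at import time.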
-
### Required Prerequisites
- [X] I have searched the [Issue Tracker](https://github.com/PKU-Alignment/safe-rlhf/issues) and [Discussions](https://github.com/PKU-Alignment/safe-rlhf/discussions) to …
-
### Required prerequisites
- [X] I have read the documentation.
- [X] I have searched the [Issue Tracker](https://github.com/PKU-Alignment/safe-rlhf/issues) and [Discussions](https://github.com/PKU-…
-
### Issue Type
Bug
### Tensorflow Version
Tensorflow-rocm v2.11.0-3797-gfe65ef3bbcf 2.11.0
### rocm Version
5.4.1
### Custom Code
Yes
### OS Platform and Distribution
Archli…
-
# Introduction
Recent advances in Large Language Models (LLMs) have enabled developers to use natural language in their applications with higher quality and capability. As ChatGPT has shown, t…
-
## Description
Can't export a large collection: it maxes out memory usage, hangs, then crashes. Our database is primarily one large collection (approx. 4.5 million documents), and the `typesense-serve…
-
Trying to figure out how to use your weight-conversion script. Here's what I'm trying:
(1) Using the GPTQ-for-LLaMa repo to convert the LLaMA 65B model to a .safetensors file:
$ python llama.py ~/models…
-
Over on @Laaza's text-gen-ui PR there has been discussion of inference performance.
In particular, people are concerned that recent versions of GPTQ-for-LLaMa have performed worse than the older ve…