-
I am not sure whether this is already tracked in another issue; at least I haven't found one. I have an AMD Radeon graphics card with 8 GB of VRAM. I should be able to run 7B GPTQ models okay, but I…
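For context, a rough back-of-the-envelope VRAM estimate for a 4-bit GPTQ 7B model. The figures below are approximations (the 1.5 GB overhead allowance is an assumption, not a measured value), sketched as:

```python
def estimate_gptq_vram_gb(n_params_billion: float, bits: int = 4,
                          overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate for a quantized model: weight memory plus a
    fixed allowance for activations and KV cache (the allowance is a guess)."""
    weight_gb = n_params_billion * 1e9 * bits / 8 / 1024**3
    return weight_gb + overhead_gb

# A 7B model at 4 bits needs roughly 3.3 GB for weights alone, so it
# should comfortably fit in 8 GB of VRAM with headroom to spare.
print(round(estimate_gptq_vram_gb(7), 1))  # → 4.8
```

This is only a sanity check that the model class fits the card; actual usage depends on context length and the loader's own buffers.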
-
Llama 3.1 is out now, but unfortunately it cannot be loaded with the current text-generation-webui. I tried updating the transformers library, which makes the model loadable, but I then get an error when trying to use …
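Llama 3.1 support was first added in transformers 4.43.0 (to my understanding; worth verifying against the release notes), so a quick version comparison can tell whether an installed build is new enough. A small hypothetical helper (`supports_llama_3_1` is illustrative, not part of transformers):

```python
def supports_llama_3_1(version_str: str,
                       min_version: tuple[int, int, int] = (4, 43, 0)) -> bool:
    """Compare a 'major.minor.patch' version string against the first
    transformers release believed to load Llama 3.1. Assumes plain
    numeric components (dev/rc suffixes would need extra parsing)."""
    parts = tuple(int(p) for p in version_str.split(".")[:3])
    return parts >= min_version

print(supports_llama_3_1("4.43.1"))  # → True
print(supports_llama_3_1("4.42.4"))  # → False
```

In practice one would feed it `importlib.metadata.version("transformers")`; the caveat in the issue stands, though: a newer transformers alone may not be enough if the webui pins other dependencies.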
-
I'm trying to test the default case, but it doesn't work.
```
(localGPT) ➜ localGPT git:(main) ✗ CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python==0.1.83 --no-cache-dir
```
…
-
I have installed h2ogpt on Ubuntu 22.04 following the documented procedure, but when I run the following command I get an error about a missing **config.json** file. Please let me know how to resolve this error.
The comm…
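As a diagnostic, a "missing config.json" error usually means the model directory being loaded does not contain a Hugging Face-style config file. A small hypothetical helper to check the directory before launching (the function name and file list are illustrative, not part of h2ogpt):

```python
from pathlib import Path

def check_model_dir(model_dir: str) -> list[str]:
    """Return the expected Hugging Face model files that are missing
    from model_dir; config.json is the one the loader complains about."""
    expected = ["config.json", "tokenizer_config.json"]
    root = Path(model_dir)
    return [name for name in expected if not (root / name).exists()]
```

If `config.json` shows up as missing, the model download was likely incomplete or the path points at the wrong directory, and re-downloading the model (or correcting the path argument) is the usual fix.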
-
-
I wanted to contribute to the docs with neat code snippets for the Python APIs (like the snippets found in the PyTorch/JAX docs).
I followed the instructions on this [page](https://github.com/hwaseem04/mlx/tree/m…
-
### Your current environment
```
root@cy-ah85026:/vllm-workspace# ray status
======== Autoscaler status: 2024-08-02 02:04:32.248220 ========
Node status
------------------------------------------…
```
-
_When I run this command_
```
(h2ogpt) C:\Users\username\h2ogpt>python generate.py --base_model='llama' --prompt_type=llama2 --score_model=None --langchain_mode='UserData' --user_path=user_path
```
_I g…
-
Hello! I am using the prebuilt container [dustynv/llama_cpp](https://hub.docker.com/r/dustynv/llama_cpp/tags), which contains the built C++ executables inside `/opt/llama.cpp`.
However, trying to r…
-
I am using the Llama() function for a chatbot in the terminal, but when I set n_gpu_layers=-1 (or any other number) it doesn't engage the GPU for computation. In comparison, when I set it in LM Studio it works perfectly and…
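In llama-cpp-python, `n_gpu_layers` controls how many transformer layers are offloaded to the GPU, with `-1` conventionally meaning "all layers" — but offload only actually happens if the installed wheel was built with GPU support (CUDA, Metal, etc.); a CPU-only build silently ignores the setting, which matches the symptom described here. A hypothetical helper illustrating the convention (`resolve_gpu_layers` is not part of the library):

```python
def resolve_gpu_layers(n_gpu_layers: int, total_layers: int) -> int:
    """Illustrates the usual n_gpu_layers convention: a negative value
    (or anything >= total) offloads every layer; 0 keeps all on CPU."""
    if n_gpu_layers < 0:
        return total_layers
    return min(n_gpu_layers, total_layers)

print(resolve_gpu_layers(-1, 32))  # → 32 (all layers offloaded)
print(resolve_gpu_layers(20, 32))  # → 20 (partial offload)
print(resolve_gpu_layers(0, 32))   # → 0  (CPU only)
```

If the setting has no effect, the usual remedy is reinstalling the package with the appropriate build flags so a GPU-enabled backend is compiled in, rather than changing the parameter value.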