-
Is it possible to modify the module to use a local LLM instead of OpenAI? I believe a modification of this page would do it: https://github.com/wickercar/foundry-ai-text-importer/blob/main/src/module/m…
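Many local LLM servers (text-generation-webui, llama.cpp's server, LM Studio, etc.) expose an OpenAI-compatible HTTP endpoint, so one plausible approach is to keep the module's request shape and only swap the base URL. The sketch below builds such a request with the standard library; the `http://localhost:5000/v1` address and `local-model` name are assumptions, not anything the repo documents.

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-style chat completion request aimed at a local server.

    base_url is hypothetical (e.g. "http://localhost:5000/v1" for a local
    OpenAI-compatible server); only the URL differs from the hosted API.
    """
    url = f"{base_url}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example (requires a local server actually listening):
# req = build_chat_request("http://localhost:5000/v1", "local-model", "Hello")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Whether this works for the Foundry module depends on where its OpenAI calls are made; the linked file would be the place to redirect them.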
-
I get the following error when I try to run the interrogator:
```
Loading CLIP Interrogator 0.6.0...
load checkpoint from /home/trahloc/s/ai/stable-diffusion-webui/models/BLIP/model_base_caption_…
-
Recently, machine learning support on the Apple silicon platform has been advancing rapidly thanks to the open-source community, and the unified memory architecture of Apple chips brings new hope for deploying large models widely. This project has therefore pushed new code adding support for MPS (Metal Performance Shaders), the macOS GPU acceleration framework.
However, Apple silicon is less than three years old, and its machine learning ecosystem is only just getting started, so there are bound to be many issues. This issue is used to tr…
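For context on what MPS support typically involves, a common pattern in PyTorch projects is a small device-selection helper that prefers MPS on Apple silicon and falls back to CUDA or CPU. This is a minimal sketch of that pattern, not this project's actual code; it guards the `torch` import so it degrades gracefully when PyTorch is absent.

```python
def pick_device():
    """Choose the best available PyTorch device string, preferring Apple's MPS.

    A minimal sketch of the selection logic a project might add for macOS
    GPU support; returns "cpu" when torch is not installed.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    # torch.backends.mps exists on recent PyTorch builds; guard for older ones.
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"
```

Tensors and models are then moved with `.to(pick_device())` in the usual way.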
-
Hi all,
Is it possible to do inference on the aforementioned machines, since we are facing so many issues with the Falcon model on Inf2?
Context:
We are facing issues while using Falcon/Falcoder on t…
-
I am having problems running this with an Nvidia 4090. I have been running other models/setups (outside of this repo) on the GPU without problems.
sudo ./run.sh --model code-7b --with-cuda
[+] Running 1/…
-
- [ ] [Qwen-1.5-8x7B : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1atw4ud/qwen158x7b/)
# TITLE: Qwen-1.5-8x7B : r/LocalLLaMA
**DESCRIPTION:** "Qwen-1.5-8x7B
New Model
Someone creat…
-
Check out the [newly built-in Superbooga extension](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Extensions.md#built-in-extensions) and its parent.
I'm pretty sure the API does…
-
Hi!
Thank you for the paper! It is inspiring that you can compress weights to about 1 bit per weight and the model still works better than random.
A practical sub-2-bit quantization algorithm would be a grea…
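To make the "about 1 bit" idea concrete, here is a toy sign-based 1-bit quantizer with a single per-tensor scale (the mean absolute value). This is only an illustration of the general sub-2-bit idea, not the paper's algorithm, and real schemes use learned or blockwise scales.

```python
def quantize_1bit(weights):
    """Sign-based 1-bit quantization with one shared scale.

    Toy illustration: keep only the sign of each weight (1 bit each)
    plus a single float scale, here the mean absolute value.
    """
    scale = sum(abs(w) for w in weights) / len(weights)
    bits = [1 if w >= 0 else 0 for w in weights]
    return scale, bits

def dequantize_1bit(scale, bits):
    """Reconstruct weights as +scale or -scale from the stored sign bits."""
    return [scale if b else -scale for b in bits]
```

For example, `[1.0, -2.0, 3.0, -4.0]` quantizes to scale 2.5 with bits `[1, 0, 1, 0]` and dequantizes to `[2.5, -2.5, 2.5, -2.5]`.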
-
### Issue you'd like to raise.
I am trying to use GPT4All prompting with langchain, following this link:
https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html
I am…
-
So do I need to install both Mambaforge and Miniforge for Windows? When installing Mambaforge, I see nothing about Miniforge.