-
### System Info
- `transformers` version: 4.36.0
- `autoawq` version: 0.1.7
- Platform: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.13
- Huggingface_hub versi…
-
I am looking for an API server backend that can run the CodeLlama model, which could then be integrated into VSCode plugins to offer a code-assistance service.
What is the supporting component to fulfill t…
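For context, most local serving backends (for example vLLM or llama.cpp's server mode) expose an OpenAI-compatible `/v1/completions` route that editor plugins can point at. A minimal sketch of building such a request with only the standard library; the base URL, model name, and parameter values here are assumptions, not tied to any specific server:

```python
import json
import urllib.request

def build_completion_request(base_url: str, prompt: str,
                             max_tokens: int = 128) -> urllib.request.Request:
    """Build an OpenAI-style completion request for a local code-model server."""
    payload = {
        "model": "codellama",   # model name is server-dependent (assumption)
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the built request with `urllib.request.urlopen` (or pointing the plugin's endpoint setting at the same URL) is then server-specific.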
-
Hey, I made a quick Sankey diagram for Tulu v2 and thought it might be interesting to share with you guys. The reason I'm doing this is that I noticed FLAN is repeatedly used by different datasets,…
nuoma updated
8 months ago
-
Is there any way I can find the maximum context length of a local LLM?
-
First, I want to express my gratitude for this project. I think TinyLlama has a lot of potential and we're just starting to see it. Kudos!
I'm pretty new to this exciting field and this is the fi…
-
### System Info
When using langchain to initialize MyGPT4ALL, I encounter this error:
```
from custome_llm import MyGPT4ALL
print("Starting")
# Create an instance of MyGPT4ALL
my_gpt_instance…
-
### Bug description
CodeLlama-7B instruct download is stuck at 93.4%. I am able to use the model, though. This doesn't happen with other models such as Mistral-7B-OpenOrca.
### Steps to reproduce
Download…
-
### Describe the bug
Cannot load an autoawq model in text-generation-webui 1.7. How do I fix this?
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
when i t…
-
### Issue you'd like to raise.
When I try to run this script
```
class WebsiteSummary:
def _SummorizeWebsite(objective, websiteContent):
my_gpt_instance = MyGPT4ALL()
…
-
### Description
When I run docker-compose-postgres.yml with some modifications to add the missing timezone var, and set the various repos for nextjs, agixt, and streamlit to use main instead of latest,…