-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [ ] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
### What is the issue?
I figure this is a downstream packaging issue, but this could possibly do with some upstream help. Arch is at 0.3.12, and has recently attempted to package against their ROCm 6…
-
This is going to collect missing spaces after a period, as discussed in https://github.com/petergtz/alexa-wikipedia/issues/37.
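Not from the issue itself, but as a minimal Python sketch of one way such cases could be detected and repaired (the function name and regex are illustrative assumptions, not code from the repository):

```python
import re

def fix_missing_space_after_period(text: str) -> str:
    """Insert a space after a period that is immediately followed by an
    uppercase letter, e.g. "end.Next" -> "end. Next".

    The uppercase lookahead avoids touching decimals like "3.14", but it
    will still misfire on abbreviations such as "U.S.A.", so a real fix
    would need an exception list.
    """
    return re.sub(r'\.(?=[A-Z])', '. ', text)

print(fix_missing_space_after_period("First sentence.Second sentence."))
# -> "First sentence. Second sentence."
```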
-
Provide guarantees, guidelines, and a process to keep out-of-tree users operational while the Zephyr project code advances with new technologies, code cleanups, and other major code and API changes…
-
I am pondering how to think about packet rates in the 100G era. How should we be designing and optimizing our software?
Consider these potential performance targets (the line-rate arithmetic is sketched after the list):
- A: 1x100G @ 64 Mpps.
- B: 1x100…
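As a sketch of the underlying arithmetic (my own illustration, not from the original post; the 20 bytes of per-frame overhead are the Ethernet preamble, start-of-frame delimiter, and inter-frame gap):

```python
def line_rate_mpps(link_gbps: float, frame_bytes: int) -> float:
    """Theoretical packet rate of an Ethernet link, in Mpps.

    Each frame occupies frame_bytes + 20 bytes on the wire:
    7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap.
    """
    wire_bits = (frame_bytes + 20) * 8
    return link_gbps * 1e9 / wire_bits / 1e6

print(line_rate_mpps(100, 64))    # ~148.8 Mpps: 100G line rate with minimum-size frames
print(line_rate_mpps(100, 1518))  # ~8.1 Mpps: full-size frames cost far fewer packets per bit
```

For scale: at full 64-byte line rate, a single 3 GHz core has only about 20 cycles per packet, which is why per-packet budgets dominate software design at 100G.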
-
**Issue type:**
[X] question
[ ] bug report
[ ] feature request
[ ] documentation issue
**Database system/driver:**
[ ] `cordova`
[ ] `mongodb`
[ ] `mssql`
[ ] `mysql` / `mariadb`
[ ] …
-
Can llama.cpp support the new multimodal model [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-9b-instruct)? IDEFICS is based on LLaMA-1 7B and LLaMA-1 65B.
-
Hi, thanks for the great repo!
Would you consider including our recent MoE-LLaVA in the project?
Title: "MoE-LLaVA: Mixture of Experts for Large Vision-Language Models"
Paper: https://arxiv.org…
-
#108 is going to be an important place to start.
Twitter is the only context where I do not currently see a need for increased emphasis on self-curation, considering how that all appearan…
-
Hi,
Thanks for the efforts in building the MME benchmark!
I would like to request adding our MoE-LLaVA-2.7B×4 to the MME benchmark.
Title: "MoE-LLaVA: Mixture of Experts for Large Vision-Language Models"
…