-
In [this comment](https://github.com/mpiannucci/gribberish/issues/41#issuecomment-2405209996) @mpiannucci said:
> I... was going to codegen [a Rust implementation of the GRIB code tables] but never…
-
When I test the public [DCLM-7B](https://huggingface.co/apple/DCLM-7B) model on a small TriviaQA subset, the metrics are very low.
Eval metrics/triviaqa_sm_sub/0-shot/InContextLearningGenerationExactMatchAccu…
-
### Your current environment
Hi,
While serving a model with vLLM and generating text, how could I trigger a `curl` request to stop the current generation?
Currently using Llama 3.1 70B.
Thanks
### Ho…
-
I have developed an FMU-generation tool that creates FMUs for Linux in co-simulation mode.
I am wondering where the extracted files are located when PyFMI loads an FMU.
I am asking this question b…
-
***Under Construction***
The Answer Engine, released in version 0.13, provides a Q&A interface for Tabby's users to interact with the LLM, optionally within the context of a connected repository. T…
-
### System Info
- transformers version: 4.43.0.dev0
- Platform: Linux-4.19.91-014.15-kangaroo.alios7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.14
- Huggingface_hub version: 0.25.0
- S…
-
- [x] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Your Question**
I would like to use Answer Relevance for RAG evaluation in Jap…
-
Hello,
thank you for the great work.
I have been trying to experiment with the model and see how it works.
My question is:
Can I use Llama 3.2 Vision to cover cases where there is no image?
I…
-
#### Reason/Context
Please try answering a few of these questions:
- Why do we need this improvement?
**Puppeteer is used only for PDF generation while being quite heavy (~200 MB).**
- How will this chan…