-
### Willingness to contribute
No. I cannot contribute this feature at this time.
### Proposal Summary
When working with MLflow Evaluation or AI agents, there are "hidden" system prompts that are no…
-
# URL
- https://arxiv.org/abs/2308.05342
# Affiliations
- Yuqing Wang, N/A
- Yun Zhao, N/A
# Abstract
- In Large Language Models (LLMs), there have been consistent advancements in task-specific p…
-
**Describe the bug**
When attempting to install the `guardrails/qa_relevance_llm_eval` validator from Guardrails Hub, the installation fails due to a missing dependency.
**To Reproduce**
Steps to…
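For reference, a minimal reproduction sketch of the install step, assuming the standard Guardrails Hub flow; the `hub://guardrails/qa_relevance_llm_eval` URI is inferred from the validator name, and the actual reproduction steps are truncated above.

```python
# Hypothetical repro sketch: invoke the Guardrails Hub CLI install for the validator.
# Assumption: this is the command that fails with the missing-dependency error.
import subprocess

subprocess.run(
    ["guardrails", "hub", "install", "hub://guardrails/qa_relevance_llm_eval"],
    check=True,  # raise if the install exits non-zero
)
```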
-
Hi,
I mentioned this on the Twitter post. I recently released a [preprint](https://arxiv.org/abs/2408.04114) that does very similar work; I also used part of LLM-Aggrefact for my benchmark.
…
-
Currently the chat can use either a `langchain` or an `idefics` model interface. The `langchain` model interface uses the selected LLM as the foundation model for a LangChain Conversational or Co…
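For illustration, here is a rough sketch (my assumption, not the repo's actual code) of what the `langchain` interface path amounts to: the selected LLM wrapped in a LangChain conversational chain with buffer memory. `FakeListLLM` stands in for the selected model so the snippet runs without credentials.

```python
# Sketch of a LangChain conversational chain around a selected LLM (illustrative only).
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_community.llms import FakeListLLM  # stand-in for "the selected LLM"

llm = FakeListLLM(responses=["Hello! How can I help?"])
chat = ConversationChain(llm=llm, memory=ConversationBufferMemory())  # keeps chat history

print(chat.predict(input="Hi there"))
```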
-
Dear authors,
Thank you very much for this amazing paper. I tried to reproduce your results in Table 4 by using the weights fine-tuned on VQA-RAD to evaluate on the downstream dataset, but it seems…
-
Hi, thank you for the wonderful ollama project and the amazing community!
I am testing the 3-bit quantized Mixtral model on an RTX400 with 20GB of VRAM. The model is 20GB in size and, as you ca…
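In case it helps frame the question, here is a minimal sketch (not from the original report) of controlling GPU offload through the ollama Python client; the model tag and layer count below are assumptions for illustration only.

```python
# Sketch: cap the number of layers offloaded to the GPU so the rest stays in system RAM.
import ollama

response = ollama.chat(
    model="mixtral:8x7b-instruct-v0.1-q3_K_M",  # assumed tag for a 3-bit quantized Mixtral
    messages=[{"role": "user", "content": "Hello!"}],
    options={"num_gpu": 20},  # illustrative layer count; tune to fit the 20GB of VRAM
)
print(response["message"]["content"])
```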
-
Thanks for adding VLM support to textgrad.
This doc describes how to use textgrad to do automatic prompt optimization for [`gpt-4o`](https://github.com/zou-group/textgrad/blob/main/examples/notebooks/Tutorial-Mul…
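As context for the question, here is a minimal sketch of the prompt-optimization loop that tutorial covers, assuming the public textgrad API (`set_backward_engine`, `BlackboxLLM`, `Variable`, `TextLoss`, `TGD`); the prompts and loss instruction are illustrative only, not taken from the notebook.

```python
import textgrad as tg

# Use gpt-4o both as the model being prompted and as the backward (feedback) engine.
tg.set_backward_engine("gpt-4o", override=True)

system_prompt = tg.Variable(
    "You are a helpful assistant. Answer concisely.",
    requires_grad=True,
    role_description="system prompt to be optimized",
)
model = tg.BlackboxLLM("gpt-4o", system_prompt=system_prompt)
optimizer = tg.TGD(parameters=[system_prompt])

question = tg.Variable(
    "What is 12 * 7?",
    requires_grad=False,
    role_description="question to the model",
)
answer = model(question)

# Natural-language loss: the backward engine critiques the answer and the feedback
# propagates back into the system prompt.
loss = tg.TextLoss("Evaluate whether the answer is correct and concise.")(answer)
loss.backward()
optimizer.step()

print(system_prompt.value)  # updated prompt after one optimization step
```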
-
Hello, and congratulations on your work. Is it possible to include the output of the first stage (the corresponding knowledge sub-graphs) for the train and val sets too?