-
I think we can now try eating our own dog food, and let the LLM write the code to solve the [second most voted](https://github.com/ollama/ollama/issues?q=config+file+is%3Aopen+sort%3Areactions-%2B1-desc) issu…
-
We will create a sample showing how you can work around the output limit of LLMs on Amazon Bedrock through creative prompting. In our sample we will leverage Claude 3, but a similar strategy can b…
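One common "creative prompting" approach to an output limit is to ask the model to continue its previous answer in a loop until it signals completion. The sketch below stubs the model call with a local function (`fake_model` and the `<END>` sentinel are illustrative assumptions, not part of the sample); on Bedrock you would replace the stub with an `invoke_model` call against Claude 3.

```python
# Sketch: accumulate output across multiple calls, prompting the model to
# continue where it left off, until a completion sentinel appears.
# fake_model and the DONE sentinel are hypothetical stand-ins for a real
# Bedrock invoke_model call.

DONE = "<END>"

def fake_model(prompt: str, so_far: str) -> str:
    # Stub: emits at most 20 characters of the "full" answer per call and
    # appends the sentinel once the answer is exhausted.
    full_answer = "Lorem ipsum dolor sit amet, consectetur adipiscing."
    chunk = full_answer[len(so_far):len(so_far) + 20]
    if len(so_far) + len(chunk) >= len(full_answer):
        return chunk + DONE
    return chunk

def generate_long(prompt: str, max_rounds: int = 10) -> str:
    """Stitch together a response longer than any single call's limit."""
    output = ""
    for _ in range(max_rounds):
        chunk = fake_model(prompt, output)
        if chunk.endswith(DONE):
            output += chunk[:-len(DONE)]
            break
        output += chunk
    return output

text = generate_long("Explain the topic at length.")
```

In a real implementation the continuation prompt would include the text produced so far (or the tail of it) so the model picks up mid-sentence; `max_rounds` bounds cost if the sentinel never appears.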
-
Type: feature-request
## Description:
When using complex scenarios, we need to teach the LLM how to generate the
response. This technique is called few-shot prompting, where a few example messages are part
…
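The idea can be sketched as follows: example exchanges are prepended to the conversation so the model infers the expected response format. The message schema below follows the common chat-completions shape, and the sentiment-labeling task is a hypothetical illustration, not from the request above.

```python
# Minimal few-shot prompting sketch: hand-written example exchanges are
# injected as prior user/assistant turns before the real query.
# The task and examples here are illustrative.

FEW_SHOT_EXAMPLES = [
    ("The delivery was fast and the food was great!", "positive"),
    ("The app crashed twice before I could order.", "negative"),
]

def build_messages(user_input: str) -> list[dict]:
    """Assemble a few-shot message list for a chat-style LLM API."""
    messages = [{
        "role": "system",
        "content": "Classify the sentiment of each review as 'positive' "
                   "or 'negative'. Reply with one word.",
    }]
    for review, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": review})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages("Service was slow but the staff were friendly.")
```

The feature request above would presumably let such example turns be declared once and reused, rather than rebuilt for every call.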
-
### Bug Description
For a basic general LLM/ChatGPT query, execution took 17 seconds, while pure Python took only 4-7 seconds; after debugging into the langflow code, see t…
-
- [ ] [Introduction to AI Agents - Cerebras Inference](https://inference-docs.cerebras.ai/agentbootcamp-section-1)
# Introduction to AI Agents - Cerebras Inference
## Overview
Cerebras Inference ho…
-
Hi!
Firstly, thanks for this useful paper!
I had a question regarding the prompt instruction text. An example of the prompt from your paper is shown below. I can see that you are providing the …
-
# URL
- https://arxiv.org/abs/2409.17539
# Affiliations
- Tongxuan Liu, N/A
- Wenjiang Xu, N/A
- Weizhe Huang, N/A
- Xingyu Wang, N/A
- Jiaxing Wang, N/A
- Hailong Yang, N/A
- Jing Li, N…
-
### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Describe the bug and reproduction steps
1. Use Docker command (with WSL) for setup for 0.14 with the p…
-
# URL
- https://arxiv.org/abs/2411.00331
# Authors
- Chumeng Jiang
- Jiayin Wang
- Weizhi Ma
- Charles L. A. Clarke
- Shuai Wang
- Chuhan Wu
- Min Zhang
# Abstract
- With the rapid dev…
-
Hi @JulianSampels,
Could you help @Tazio-Faha adapt the code to get all the prompts from [results_prompts](https://github.com/sefeoglu/ODS_project_student/tree/master/results/result_pro…