-
Do you plan to support the function calling feature? Would you be open to accepting PRs in that regard?
Currently, functions such as DuckDuckGo search or Wolfram Alpha could greatly extend the mo…
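As a rough illustration of what such a feature could look like, here is a minimal sketch of exposing an external tool (e.g. a search function) through a function-calling interface. The JSON schema follows the common "tools" convention; the `TOOLS` list, `dispatch` helper, and the stubbed handler are hypothetical, not an existing API.

```python
import json

# Hypothetical tool declaration the model would be shown (assumption: an
# OpenAI-style JSON schema; adapt to whatever format the project adopts).
TOOLS = [
    {
        "name": "duckduckgo_search",
        "description": "Search the web and return top result snippets.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms."}
            },
            "required": ["query"],
        },
    }
]

def dispatch(call_json: str) -> str:
    """Parse a model-emitted function call and route it to a local handler."""
    call = json.loads(call_json)
    if call["name"] == "duckduckgo_search":
        # A real handler would query the search API; stubbed for the sketch.
        return f"results for: {call['arguments']['query']}"
    raise ValueError(f"unknown tool {call['name']!r}")
```

The model emits a JSON call matching one of the declared schemas, the host dispatches it, and the tool's output is fed back into the context for the next generation step.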
-
When compiling llama.cpp "out of the box" and prompting it as follows (in this case on an M1 Mac)...
./main -p "Write a rhyme haiku about a rabbit and a cube." -m llama-2-7b-chat.Q4_0.gguf -n 128…
-
# Background research
## Readings
https://lilianweng.github.io/posts/2023-06-23-agent/
> * Finite context length: The restricted context capacity limits the inclusion of historical information, de…
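One common mitigation for the finite-context limitation described above is to keep only the most recent turns that fit a token budget. A minimal sketch, assuming whitespace splitting as a stand-in for real tokenization (a real system would use the model's tokenizer):

```python
def truncate_history(messages, max_tokens):
    """Keep the newest messages whose combined token count fits the budget.

    Token counting is approximated by whitespace splitting; swap in the
    model's tokenizer for accurate counts.
    """
    kept, total = [], 0
    # Walk from newest to oldest so recent turns survive truncation.
    for msg in reversed(messages):
        n = len(msg.split())
        if total + n > max_tokens:
            break
        kept.append(msg)
        total += n
    return list(reversed(kept))
```

For example, with a budget of 4 "tokens", `truncate_history(["a b", "c d e", "f"], 4)` drops the oldest message and keeps `["c d e", "f"]`.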
-
## ~~Translator~~ (`Outdated`)
The specific prompt content:
```plaintext
You are a translation engine that can only translate text and cannot interpret it.
Please translate in two steps, and output the result f…
-
I am using a Llama 2 GGUF model, and it says it loaded successfully, but when prompted it does nothing, just a constant three-dot animation as if it's loading. Eventually it just stops the animation and shows no …
-
openai_provider.comprehensiveness_with_cot_reasons(source=..., summary=...)
If you execute the above code, the current CoT prompting does not capture the supporting evidence; it comes back empty. I am …
-
COPRO (and MIPRO) explicitly require English:
`
class BasicGenerateInstruction(Signature):
""You are an instruction optimizer for large language models. I will give you a ``signature`` of fiel…
-
I noticed that the prompts at https://aider.chat/docs/benchmarks.html don't start with "Please" and don't end with "Thanks!" -- I'm curious whether your benchmark results would improve if they did.
-
Due to the rapid development of features, `README.md` unfortunately no longer accurately documents all of the options within `coco`.
**Things to add:**
- Documentation of the `model` option
- E…
-
env: Intel i7
model: llama-2-7b-chat-hf-INT4
Problem: The model does not stop running as required and gets stuck generating in a loop.
test code:
```
import os
from bigdl.llm.langchain.llms import Tran…