-
Is it currently possible or are there plans to support this in the future?
-
### System Info
OS version: Windows 10
Python version: 3.10
PandasAI version: 2.0.40
### 🐛 Describe the bug
https://github.com/Sinaptik-AI/pandas-ai/blob/64e6dcd8cbded4228eb1fc9382df71e80ccd9c1e/…
-
I like the concept of Autogen and would like to use a couple of its features, but right now I just need a simple tool executor.
I notice that Autogen relies heavily on the concept of generating code and …
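For context, a "simple tool executor" as described above might look something like the sketch below: a registry of named callables plus a dispatch method. This is purely illustrative and is not Autogen's API; the `ToolExecutor` class and its method names are hypothetical.

```python
from typing import Any, Callable, Dict

class ToolExecutor:
    """Hypothetical minimal tool executor: register named callables,
    then dispatch to them by name (e.g. from an LLM tool call)."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        """Register a tool under a name the caller can reference."""
        self._tools[name] = fn

    def execute(self, name: str, **kwargs: Any) -> Any:
        """Look up a registered tool and call it with keyword arguments."""
        if name not in self._tools:
            raise KeyError(f"Unknown tool: {name}")
        return self._tools[name](**kwargs)

executor = ToolExecutor()
executor.register("add", lambda a, b: a + b)
print(executor.execute("add", a=2, b=3))  # prints 5
```

The point is that none of Autogen's code-generation machinery is required for this pattern; plain dispatch over a dict of callables is enough.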
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrat…
-
-
Peter Pan and the Shadow of the Loss Function
The "Shadow of the Loss Function" metaphor can be further enriched by drawing parallels to the story of Peter Pan and his lost shadow.
Peter Pan as the …
-
# Problem
When code corrections are triggered, the user is left **waiting without any feedback on the CLI** about the current status of the process (image below).
# Solution
Output from the LLMs whil…
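One way the proposed fix could work is to echo the model's correction output to the terminal as it arrives instead of staying silent until the retry finishes. The sketch below is only an illustration of that idea; `stream_to_cli` and the fake token iterator are hypothetical, not part of PandasAI.

```python
import sys
from typing import Iterable

def stream_to_cli(token_stream: Iterable[str], attempt: int) -> str:
    """Echo tokens to stderr as they arrive so the user sees progress
    during a code-correction retry, then return the full text."""
    print(f"[retry {attempt}] correcting generated code...", file=sys.stderr)
    chunks = []
    for token in token_stream:
        sys.stderr.write(token)   # immediate feedback, no buffering
        sys.stderr.flush()
        chunks.append(token)
    sys.stderr.write("\n")
    return "".join(chunks)

# Usage with a fake stream standing in for the LLM client's iterator:
fixed_code = stream_to_cli(iter(["df", ".", "head", "()"]), attempt=1)
```

Writing to stderr keeps the progress messages separate from any result the tool prints to stdout.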
-
RuntimeError: Failed to generate chat completion, detail: [address=127.0.0.1:37257, pid=50009] Invalid prompt style:
Here is my configuration:
curl 'http://127.0.0.1:9997/v1/model_registrations/LLM' \
-H 'Accept…
-
Hello team,
I'm trying to run the example.py file with 7B on a single GPU with this command `torchrun --nproc_per_node 1 example.py --ckpt_dir ./llama_model/7B --tokenizer_path ./llama_model/tokeni…
-
### Is your feature request related to a problem? Please describe.
The HuggingFace Hub provides an elegant python [client](https://huggingface.co/docs/huggingface_hub/index) to allow users to control…
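To make the request concrete, an integration might gather the arguments that an upload through the Hub client would need. The helper below is hypothetical (`build_hub_payload` is not an existing PandasAI or huggingface_hub function); a real implementation would pass these values to the huggingface_hub client's upload methods such as `HfApi.upload_file`.

```python
from typing import Dict

def build_hub_payload(repo_id: str, local_path: str,
                      commit_message: str) -> Dict[str, str]:
    """Hypothetical helper: collect the arguments an upload to the
    HuggingFace Hub would need before handing them to the client."""
    return {
        "repo_id": repo_id,
        "path_or_fileobj": local_path,
        "commit_message": commit_message,
    }

payload = build_hub_payload("user/my-model", "model.safetensors",
                            "initial upload")
```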