-
Currently, we only have two options for the Orchestrator: either QueryPipeline or AgentOrchestrator, neither of which accepts custom prompt parameters.
Can we have something similar to the prompt in FunctionCallingAgen…
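A minimal sketch of what the requested API could look like: an orchestrator that accepts a user-supplied prompt template, in the spirit of the `prompt` parameter on FunctionCallingAgent. All class and parameter names below are illustrative assumptions, not the library's actual API.

```python
# Hypothetical sketch of an orchestrator with a custom prompt parameter.
# `PromptedOrchestrator` and `prompt_template` are made-up names.
from dataclasses import dataclass


@dataclass
class PromptedOrchestrator:
    """Orchestration step that fills a user-supplied prompt template."""

    prompt_template: str = "Answer the question: {query}"

    def build_prompt(self, query: str) -> str:
        # Substitute the incoming query into the user-supplied template.
        return self.prompt_template.format(query=query)


orch = PromptedOrchestrator(
    prompt_template="You are a helpful agent.\nTask: {query}"
)
print(orch.build_prompt("summarize the report"))
```

The point of the sketch is the shape of the interface: the prompt is a constructor argument, so the same orchestration logic can run with different instructions per deployment.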
-
Running llama3-70b-instruct on 8x A100-40G GPUs produces the following error:
[2024-04-22 10:52:15,696] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your s…
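The warning itself is benign: `torch.distributed.run` defaults each worker to a single OpenMP thread unless `OMP_NUM_THREADS` is set explicitly. One way to silence it is to set the variable before launching; the core-splitting heuristic below is an illustrative assumption, not a tuning recommendation.

```python
# Set OMP_NUM_THREADS explicitly so torch.distributed.run does not fall back
# to its default of 1 thread per process. Must happen before torch is imported.
import os

cpu_count = os.cpu_count() or 1
num_procs = 8  # one process per GPU in the 8x A100 setup above

# Divide the available cores evenly among the launched processes
# (only if the user has not already chosen a value).
os.environ.setdefault("OMP_NUM_THREADS", str(max(1, cpu_count // num_procs)))
print(os.environ["OMP_NUM_THREADS"])
```

The same effect can be had from the shell by exporting `OMP_NUM_THREADS` before invoking the launcher.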
-
### Software Environment
```Markdown
- paddlepaddle-gpu: 0.0.0.post120
- paddlenlp: 2.8.0
```
### Duplicate Check
- [X] I have searched the existing issues
### Error Description
```python
Error 1:
Traceback (most recent ca…
-
I am running my code in AWS SageMaker notebooks on a machine with 4 GPUs. Whenever I set tensor_parallel_size > 1, it shows me the following error.
NFO 12-13 13:07:31 llm_engine.py:72] Initi…
-
**Describe the bug**
I am using Llama-2 7B, and when I start stage 2 of EE-Tuning, the bug occurs.
**To Reproduce**
Here is the `llama2_7B_1_exit_mlp_pt.sh` I modified:
```bash
#!/bin/bash
PROJECT…
-
### Confirm this is a feature request for the Python library and not the underlying OpenAI API.
- [X] This is a feature request for the Python library
### Describe the feature or improvement you…
-
The growth in size of open-source models is outpacing the growth of memory capacity of Mac computers. The latest 70B version of Llama 3 is already pushing the limits of a fully loaded Mac Pro. The upc…
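A back-of-the-envelope calculation makes the memory pressure concrete: weight storage alone for a 70B-parameter model is 140 GB at 16-bit precision, before any KV cache or activation overhead.

```python
# Rough memory footprint of model weights at different precisions.
# Ignores KV cache, activations, and framework overhead.
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Bytes needed for the weights alone, expressed in GB (1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9


params_70b = 70e9
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_memory_gb(params_70b, bits):.0f} GB")
```

Even at 4-bit quantization, a 70B model needs roughly 35 GB for weights alone, which is why unified-memory capacity is the binding constraint on current Macs.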
-
As a user, I would like to be informed about the summarization effectiveness of my chosen LLM endpoint.
I would like to be able to evaluate an endpoint against a known, tested framework, to evaluat…
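A minimal sketch of what one such summarization metric looks like: unigram recall in the spirit of ROUGE-1. Real evaluation frameworks add stemming, higher-order n-grams, and precision/F-scores; this is only meant to show the shape of an endpoint evaluation, not a production metric.

```python
# Toy ROUGE-1-style recall: what fraction of reference tokens
# appear in the candidate summary.
def rouge1_recall(reference: str, candidate: str) -> float:
    ref_tokens = reference.lower().split()
    cand_tokens = set(candidate.lower().split())
    if not ref_tokens:
        return 0.0
    hits = sum(1 for tok in ref_tokens if tok in cand_tokens)
    return hits / len(ref_tokens)


print(rouge1_recall("the cat sat on the mat", "a cat on a mat"))  # 0.5
```

Running the same reference/candidate pairs against several endpoints and comparing scores is the kind of apples-to-apples evaluation the request describes.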
-
I trained a Llama2-3B model using OpenRLHF and it trained fine. But when I switched to the 7B version of the model, I had to move to multiple nodes and encountered this error. After contacting the sup…
-
**Is your feature request related to a problem? Please describe.**
Our bot currently has a large number of tools, most of them using a different JDBC driver or a different API style (REST, GQL, gRPC). This comes wi…
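One way to tame that heterogeneity is a single tool interface that every backend (JDBC, REST, GQL, gRPC) implements, with a registry for dispatch. The sketch below is a hypothetical illustration; the class and method names are assumptions, not the bot's actual API.

```python
# Hypothetical unified-tool interface: each backend adapts its protocol
# behind the same `invoke` method, so the bot only sees one shape.
from typing import Protocol


class Tool(Protocol):
    name: str

    def invoke(self, query: str) -> str: ...


class RestTool:
    """Example adapter; a real one would issue an HTTP request here."""

    name = "rest_search"

    def invoke(self, query: str) -> str:
        return f"rest result for {query!r}"


# Registry keyed by tool name; gRPC/JDBC/GQL adapters would register the same way.
registry: dict[str, Tool] = {t.name: t for t in (RestTool(),)}
print(registry["rest_search"].invoke("orders"))
```

With this shape, adding a new driver means writing one adapter class rather than threading another protocol through the bot's core logic.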