-
```
[+] Running 2/0
✔ Container pdfgpt-pdf-gpt-1 Created …
```
-
Love the progress so far!
Will you also test and publish results on the full SWE-bench and the 25% subset, besides just SWE-bench Lite?
The auto-code-rover repo reports 22% on SWE-bench Lite and 16% on…
-
First of all, thank you so much for building Perplexica! It's super helpful to be able to use something like Perplexity with Ollama.
I have a feature request: it would be great if Perplexica allowed t…
-
Inspired by this [load balancing](https://github.com/Portkey-AI/gateway) idea.
Load balancing would allow requests to be spread across multiple models, providers, and keys, avoiding token rate limits; a rough sketch follows.
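Purely as an illustration (not an existing config or API in this repo), a naive round-robin with rate-limit fallback over OpenAI-compatible targets might look like this; every URL, key, and model name below is a placeholder:

```typescript
// Purely illustrative: a naive round-robin balancer over several
// OpenAI-compatible targets. All URLs, keys, and model names are
// placeholders, not an existing config format.
interface Target {
  baseURL: string;
  apiKey: string;
  model: string;
}

const targets: Target[] = [
  { baseURL: "https://api.openai.com/v1", apiKey: "sk-key-a", model: "gpt-4o-mini" },
  { baseURL: "http://localhost:11434/v1", apiKey: "ollama", model: "llama3" },
];

let cursor = 0;

// Rotate through targets so no single key or provider absorbs all
// traffic; on a 429 (rate/token limit), fall over to the next target.
async function completeWithFallback(prompt: string): Promise<string> {
  for (let i = 0; i < targets.length; i++) {
    const t = targets[(cursor + i) % targets.length];
    const res = await fetch(`${t.baseURL}/chat/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${t.apiKey}`,
      },
      body: JSON.stringify({
        model: t.model,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    if (res.status === 429) continue; // rate-limited: try the next target
    const data = await res.json();
    cursor = (cursor + i + 1) % targets.length;
    return data.choices[0].message.content;
  }
  throw new Error("all targets are rate-limited");
}
```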
-
From what I understand, TypeChat can be an alternative to the LangChain library, but throughout the documentation I only see it being used with OpenAI's GPT models. So, is there any option to use this with an…
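For what it's worth, a custom backend seems pluggable via TypeChat's `TypeChatLanguageModel` interface, which only asks for a `complete()` method. A minimal, untested sketch assuming a local Ollama server as the backend (the URL and model name are placeholders; double-check the exact exports against the TypeChat version you use):

```typescript
// Untested sketch: the idea is that TypeChat's translator accepts any
// TypeChatLanguageModel, so a custom complete() can call a non-OpenAI
// backend (here, Ollama's /api/generate endpoint).
import { TypeChatLanguageModel, success, error } from "typechat";

const localModel: TypeChatLanguageModel = {
  async complete(prompt: string) {
    try {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "llama3", prompt, stream: false }),
      });
      const data = await res.json();
      return success(data.response as string);
    } catch (e) {
      return error(`model call failed: ${e}`);
    }
  },
};

// localModel can then be passed wherever the docs pass the model
// returned by createLanguageModel()/createOpenAILanguageModel().
```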
-
Hi, thanks for open-sourcing your work! I've been trying to reproduce the results from your paper, but the accuracy I got is pretty low (for Game of 24, it's only 3%). Could you point out any potenti…
-
When I try to run `main.py`, I get the following output:
```bash
~/hlb-gpt$ python main.py
downloading data and tokenizing (1-2 min)
Traceback (most recent call last):
  File "/home/ubuntu/hlb…
```
-
## Explanation of the Implementation
Each agent corresponds to one internal processing layer:
- **Planning Agent**: Identifies relevant files for editing and creates a detailed task list with exact Ins…
-
- Here's a summary from consulting an LLM specialist:
---
- We have an initial thought in #74 as follows:
![image](https://github.com/user-attachments/assets/265a3d7d-0454-4e7b-9c99-a0dd9f9ecf7c…
-
For my use case, I have different models running on different servers, each of which exposes the OpenAI completions endpoint. However, from what I can see, it is currently not possible to use both the de…
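To illustrate the setup being asked for: with the official `openai` Node SDK, each server can get its own client via the `baseURL` option, and requests can be routed per model. Hostnames, ports, and model names below are made up:

```typescript
// A minimal sketch, assuming each server exposes an OpenAI-compatible
// /v1 API. Hostnames, ports, and model names are placeholders.
import OpenAI from "openai";

// One client per server; the SDK's baseURL option points the client at
// a different host while keeping the same request shape.
const servers: Record<string, OpenAI> = {
  llama3: new OpenAI({ baseURL: "http://server-a:8000/v1", apiKey: "unused" }),
  mistral: new OpenAI({ baseURL: "http://server-b:8000/v1", apiKey: "unused" }),
};

async function complete(model: string, prompt: string): Promise<string | null> {
  const client = servers[model];
  const res = await client.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0].message.content;
}
```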