-
Thanks for your wonderful work!
I installed the env following your steps and the latest requirements.txt, but at the indexing stage I ran into some problems. Here are the logs: ImportError: cannot import nam…
-
Continual in-context generation should be the future as LLM context windows will tend to become infinite.
The Crewai content team is present in alwrity to build upon. There is a dire need to simplify it for…
-
It would be nice to be able to have a conversation with the model, with the responses spoken back in voice.
For example, if asked "what is the capital of France?", the model would reply with "Paris".
Talon file s…
-
I really like the concept of the "[skill recipe](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teaching.ipynb)" in AutoGen, but I think it can be taken much further. One of the key…
-
I've used https://github.com/openvinotoolkit/openvino.genai/tree/master/llm_bench/python to test Llama 3's performance, which came out a little low.
python benchmark.py -d GPU -m D:/AIGC/llama/models/Meta-Llama-3-…
-
With #83 we introduced learning from human feedback.
This process currently requires running two commands
```
foyle logs process
foyle learn
```
We should automate it so this process contin…
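One possible sketch of the automation: a crontab entry that chains the two commands on a schedule, so learning happens continuously without manual intervention (the hourly schedule below is purely illustrative, not something the issue specifies):

```
# Hypothetical crontab entry: process logs, then learn, every hour.
# `&&` ensures `foyle learn` only runs if `foyle logs process` succeeded.
0 * * * * foyle logs process && foyle learn
```

A built-in daemon or watcher mode in foyle itself would of course be a cleaner long-term solution than an external scheduler.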
jlewi updated 4 months ago
-
### Context / Scenario
Hello, good morning/afternoon/evening and Happy Monday!
First of all, I must say that I am very impressed with this solution and keen to implement it as an internal service to …
-
### Motivation
Prefix caching is supported in many projects such as vllm, sglang and rtp-llm. Torch engine is going to support this feature in https://github.com/InternLM/lmdeploy/pull/1393. So we ra…
-
pip install e2b_code_interpreter
ERROR: Could not find a version that satisfies the requirement e2b_code_interpreter (from versions: none)
ERROR: No matching distribution found for e2b_code_interpre…
-
I can open the webpage at http://localhost:3001/
![F5605EFA-CE0C-4855-A52D-C222638CE039](https://github.com/OpenDevin/OpenDevin/assets/51695571/ae24fb7e-4ff5-4b4d-9ea1-d9224b2521a1)
But, I will st…