-
## Motivation
WasmEdge is a lightweight inference runtime for AI and LLM applications. The [LlamaEdge project](https://github.com/LlamaEdge) has developed an [OpenAI-compatible API server](https://gi…
-
### System Info
**LangChain Version:** 0.0.354
**Platform:** macOS Sonoma 14.2.1
**Python Version:** 3.11.6
### Who can help?
@hwchase17
@agola11
### Information
- [X] Th…
-
**Support for [vicuna-13b-delta-v1.1](https://huggingface.co/lmsys/vicuna-13b-delta-v1.1)**
*NOTE: It's not listed in the transformers supported-models list, but it does work with transformers.*
**Reason …
-
How can we go about setting up 'context' for the LLM to respond based on? For example, I would like to feed it information about myself and get responses tailored to said information.
Along a simi…
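One common way to do this (a minimal sketch, assuming an OpenAI-style chat API where a system message carries background information; `build_messages` is a hypothetical helper, not part of any specific library) is to prepend your personal context as a system message before the user's question:

```python
def build_messages(context: str, question: str) -> list[dict]:
    """Prepend background context as a system message so the model
    tailors its answers to it (hypothetical helper for illustration)."""
    return [
        {"role": "system",
         "content": f"Answer using the following background information:\n{context}"},
        {"role": "user", "content": question},
    ]

# Example: the resulting list can be passed as the `messages` argument
# of an OpenAI-compatible chat-completions call.
msgs = build_messages("My name is Alex; I work on embedded Rust.",
                      "What should I learn next?")
```

For larger amounts of personal information, the same idea generalizes to retrieving only the relevant snippets and injecting those into the system message instead of the full text.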
-
https://brandolosaria.medium.com/setting-up-metaais-code-llama-34b-instruct-model-fc009aa937f6
https://github.com/go-skynet/LocalAI
-
🔍 **Deep Dive: "ReAct: Reasoning and Acting with Large Language Models"**
📝 **Summary:**
The authors introduce "ReAct", a method that synergizes reasoning and acting in large language models (LL…
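The core idea can be illustrated with a toy interleaved Thought → Action → Observation loop (a sketch only; the tool, the fake model, and the stopping rule here are invented for illustration and are not from the paper's implementation):

```python
def react_loop(question, tools, llm_step, max_steps=5):
    """Toy ReAct loop: the model alternates reasoning (Thought) with
    tool calls (Action), feeding each Observation back into the trace."""
    trace = f"Question: {question}"
    for _ in range(max_steps):
        thought, action, arg = llm_step(trace)   # model proposes the next step
        trace += f"\nThought: {thought}\nAction: {action}[{arg}]"
        if action == "Finish":                   # special action ends the loop
            return arg, trace
        obs = tools[action](arg)                 # execute the tool
        trace += f"\nObservation: {obs}"         # observation re-enters the prompt
    return None, trace

# Hypothetical deterministic "model" standing in for an LLM:
def fake_llm(trace):
    if "Observation" not in trace:
        return "I should look this up", "Lookup", "height of Mont Blanc"
    return "I now know the answer", "Finish", "4808 m"

answer, trace = react_loop("How tall is Mont Blanc?",
                           {"Lookup": lambda q: "Mont Blanc is 4808 m tall"},
                           fake_llm)
```

The point of the interleaving is that each Observation grounds the next Thought, which is what distinguishes ReAct from chain-of-thought reasoning done entirely in one pass.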
-
What kind of data was used to train Llama 2?
-
[paper](https://arxiv.org/pdf/2310.03744.pdf)
see the LLaVA discussion here: https://github.com/long8v/PTIR/issues/128#issue-1749571159
## TL;DR
- **I read this because.. :** aka LLaVA1.5 / in ShareGPT4V, LL…
-
Related
- https://huyenchip.com/2023/04/11/llm-engineering.html
[Tweet thread](https://twitter.com/transitive_bs/status/1646778061160071168?s=46&t=aOEVGBVv9ICQLUYL4fQHlQ) - LLMs in Production host…
-
There are plenty of amazing solutions for using large language models (LLMs) to help with searching. For the sake of keeping this request brief, I'll point out four kinds of them that I want in a modern sea…