-
Currently, this codebase uses OpenAI's ChatGPT. A valuable improvement would be abstracting the ChatGPT requests behind a class so that any language model can run the prompt. S…
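A minimal sketch of what that abstraction could look like. The class and method names are illustrative, not taken from the codebase; the OpenAI wrapper assumes the current `openai` client's `chat.completions.create` API.

```python
from abc import ABC, abstractmethod

class LanguageModel(ABC):
    """Backend-agnostic interface: anything that can complete a prompt."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class OpenAIChatModel(LanguageModel):
    """Wraps the existing ChatGPT calls behind the shared interface."""

    def __init__(self, client, model: str = "gpt-3.5-turbo"):
        self.client = client  # e.g. an openai.OpenAI() instance
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

class EchoModel(LanguageModel):
    """Stand-in backend for tests; returns the prompt unchanged."""

    def complete(self, prompt: str) -> str:
        return prompt
```

Call sites then depend only on `LanguageModel`, so swapping in a local or hosted model means adding one subclass rather than touching every prompt.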
-
### 🚀 The feature, motivation and pitch
Takes 1 hour+ on CI compared to others, which take
-
How can I use different language models from Hugging Face for knowledge distillation in this setup?
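Whatever teacher/student pair is chosen, the core distillation objective is small enough to sketch directly. Below is the temperature-scaled soft-target KL from Hinton et al., written in NumPy so it is backend-agnostic; in practice the two logit tensors would come from a teacher and a student loaded via `transformers.AutoModelForCausalLM` (the exact checkpoints are the user's choice, not fixed by this setup).

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax, numerically stabilized."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over the vocabulary, averaged over
    positions and scaled by T^2 as in the soft-target formulation."""
    p = softmax(teacher_logits, T)                    # teacher soft targets
    log_q = np.log(softmax(student_logits, T) + 1e-12)
    log_p = np.log(p + 1e-12)
    return float((p * (log_p - log_q)).sum(axis=-1).mean() * T * T)
```

The loss is zero when the student matches the teacher's distribution and positive otherwise, which makes it easy to sanity-check before wiring it into a training loop.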
-
Hi, thanks for your great work. I was wondering how many GPUs are needed to train LLaVA-NeXT with a 72B LLM.
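For rough sizing, a common back-of-the-envelope rule (from the ZeRO paper's accounting, not from this repo's actual recipe) is ~16 bytes per parameter for full fine-tuning with Adam in mixed precision: fp16 weights and gradients plus fp32 optimizer states, before activations and the vision tower.

```python
def training_memory_gb(n_params, bytes_per_param=16):
    """Rough full-fine-tuning footprint: ~16 bytes/param
    (2 weights + 2 grads + 12 Adam states), excluding activations."""
    return n_params * bytes_per_param / 1024**3

mem = training_memory_gb(72e9)  # ≈ 1073 GB of model/optimizer state alone
gpus_80gb = mem / 80            # ≈ 13.4, so 14+ 80 GB GPUs before activations
```

This is only a lower bound; sharded optimizers (ZeRO/FSDP), LoRA, or freezing the LLM change the answer dramatically, so the real requirement depends on the training configuration.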
-
How can we integrate the use of language models to evaluate language model generations?
Currently, lm-eval evaluates language model generations with conventional metrics such as accuracy, BLEU, etc…
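One common approach is LLM-as-a-judge: prompt a strong model to grade each generation and parse a score out of its reply. A minimal, backend-agnostic sketch (the prompt wording and function names here are illustrative, not part of lm-eval):

```python
import re

JUDGE_PROMPT = (
    "Rate the following answer to the question on a 1-5 scale.\n"
    "Question: {question}\nAnswer: {answer}\n"
    "Reply with only the number."
)

def judge_score(question, answer, judge):
    """`judge` is any callable prompt -> text (e.g. a thin wrapper
    around an API model); parse the first 1-5 digit it returns."""
    reply = judge(JUDGE_PROMPT.format(question=question, answer=answer))
    m = re.search(r"[1-5]", reply)
    return int(m.group()) if m else None
```

Keeping the judge as a plain callable means the same metric code works with any backend, and a deterministic fake judge can be substituted in unit tests.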
-
# 1. Ollama
## 1. Use the Ollama CLI:
```
ollama serve
ollama run llama2:7b   # other tags: llama3, llama3:70b, mistral, dolphin-phi, phi, neural-chat, codellama, llama2:13b, llama2:70b
ollama list
ollama show llama2:7b
…
-
# Revolutionize Animation: Build a Digital Human with Large Language Models
A Step-by-Step Guide to Creating Your Next AI-Powered Avatar
[https://monadical.com/posts/build-a-digital-human-with-large…
-
- [ ] [[2005.14165] Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
# [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
-
**Submitting author:** @hauselin (Hause Lin)
**Repository:** https://github.com/hauselin/ollama-r
**Branch with paper.md** (empty if default branch): joss
**Version:** v1.2.0.9000
**Editor:** @crverno…
-
- Paper name: Automatic Instruction Evolving for Large Language Models
- ArXiv Link: https://arxiv.org/pdf/2406.00770
To close this issue open a PR with a paper report using the provided [report…