We view Large Language Models (LLMs) as stochastic language layers in a network, where the learnable parameters are the natural-language prompts at each layer. We stack two such layers, feeding the output of one layer into the next, and call the stacked architecture a Deep Language Network (DLN).
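For illustration, here is a minimal sketch of the two-layer forward pass described above; `call_llm`, the prompt strings, and the template format are placeholders, not this repository's actual API:

```python
# A minimal sketch of a two-layer DLN forward pass.
# `call_llm`, the prompts, and the template are illustrative placeholders.

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM completion call (OpenAI, vLLM, ...).
    return f"<sampled completion for: {prompt[:40]}...>"

class LanguageLayer:
    """One stochastic language layer; its learnable parameter is a prompt."""

    def __init__(self, prompt: str):
        self.prompt = prompt  # natural-language "weights" of this layer

    def forward(self, x: str) -> str:
        # Compose the layer prompt with the input and sample from the LLM.
        return call_llm(f"{self.prompt}\n\nInput: {x}\nOutput:")

# Stack two layers: the first layer's output is the second layer's input.
layer1 = LanguageLayer("Rewrite the input as a list of key facts.")
layer2 = LanguageLayer("Answer the question using the facts provided.")

def dln_forward(x: str) -> str:
    hidden = layer1.forward(x)      # intermediate natural-language activation
    return layer2.forward(hidden)   # final output of the DLN

print(dln_forward("Why is the sky blue?"))
```

Because each layer is stochastic, the intermediate text `hidden` is a sample, and "training" amounts to searching over the prompt strings rather than over numeric weights.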
Update README.md with instructions to set up vLLM #52
Updates the README with instructions on how to set up self-hosted models using vLLM. Related issues: #50 #44
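The updated instructions themselves aren't reproduced here; as a rough sketch of the usual vLLM self-hosting flow, one starts vLLM's OpenAI-compatible server and points an OpenAI-style client at it. The model name, port, and client code below are illustrative assumptions, not taken from the PR:

```python
# 1) Start vLLM's OpenAI-compatible server (shell):
#    python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-hf
#
# 2) Point an OpenAI-style client at the local endpoint (defaults to port 8000):
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.completions.create(
    model="meta-llama/Llama-2-7b-hf",  # must match the served model
    prompt="Hello, world",
    max_tokens=32,
)
print(resp.choices[0].text)
```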