cocktailpeanut / dalai

The simplest way to run LLaMA on your local machine
https://cocktailpeanut.github.io/dalai

summarization #216

Open · mishav78 opened this issue 1 year ago

mishav78 commented 1 year ago

Does this model summarize well? In particular, is it good at abstractive summarization, as opposed to extractive summarization?

RoBorg commented 1 year ago

This is not a model; it is a front end that lets you run different models (currently LLaMA and Alpaca, each available in several sizes). How good the summaries are depends on which model you use and what you're comparing against. In general, the larger the model (e.g. the 65B LLaMA variant), the better the results you can expect.
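If you want to test this yourself, dalai exposes a small Node API (described in the project README) that you can use to stream a completion for a summarization prompt. The sketch below is a minimal example, assuming the 7B Alpaca model has already been installed (e.g. via `npx dalai alpaca install 7B`); the `model` and `prompt` option names follow my reading of the README, so treat the exact shape as an assumption.

```js
// Minimal sketch: prompting a model through dalai's Node API.
// Assumes the 7B Alpaca model is already installed
// (npx dalai alpaca install 7B) and that request() takes a
// {model, prompt} object plus a per-token callback, per the README.
const Dalai = require("dalai");

const article = "..."; // paste the text you want summarized here

new Dalai().request(
  {
    model: "alpaca.7B",
    prompt: `Summarize the following text in your own words:\n\n${article}\n\nSummary:`,
  },
  (token) => {
    // Tokens stream back one at a time as the model generates them.
    process.stdout.write(token);
  }
);
```

Whether the output is genuinely abstractive (rephrased in the model's own words) rather than sentences lifted from the input will depend heavily on the model and size you pick, as noted above.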