tloen / alpaca-lora

Instruct-tune LLaMA on consumer hardware
Apache License 2.0

Does the current model support multi-round dialogue capabilities? #177

Open tensorflowt opened 1 year ago

tensorflowt commented 1 year ago

Does the current model support multi-round dialogue? If I train such a model with my own data, are there any special requirements for the dataset, for example multi-round dialogue training samples? Thanks!

claysauruswrecks commented 1 year ago

You can interact with the 13B demo here: https://huggingface.co/spaces/chansung/Alpaca-LoRA-Serve

You can see examples of the training set here: https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json and here: https://github.com/gururise/AlpacaDataCleaned
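Each record in `alpaca_data_cleaned.json` follows the single-turn Alpaca schema with `instruction`, `input`, and `output` fields. The dataset itself contains no multi-turn samples, but one common (hypothetical) workaround is to fold earlier turns of a conversation into the `input` field as context. The `dialogue_to_records` helper below is an illustrative sketch, not part of the repo:

```python
import json

def dialogue_to_records(turns):
    """Flatten a list of (user, assistant) turn pairs into
    Alpaca-style records, one record per assistant reply,
    with all earlier turns serialized into the "input" field.

    NOTE: this is a hypothetical adaptation, not an official
    alpaca-lora data format for multi-turn dialogue.
    """
    records = []
    history = []
    for user_msg, assistant_msg in turns:
        records.append({
            "instruction": user_msg,
            "input": "\n".join(history),  # prior turns as context
            "output": assistant_msg,
        })
        history.append(f"User: {user_msg}")
        history.append(f"Assistant: {assistant_msg}")
    return records

dialogue = [
    ("What is LoRA?", "LoRA is a parameter-efficient fine-tuning method."),
    ("Can I run it on one GPU?", "Yes, that is the point of alpaca-lora."),
]
print(json.dumps(dialogue_to_records(dialogue), indent=2))
```

The first record has an empty `input` (no history yet); the second carries the first exchange as context, so the model sees the conversation so far when learning each reply.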

tensorflowt commented 1 year ago

Thank you very much! If I want to train a 13B model, what is the minimum GPU configuration required?