tensorflowt opened 1 year ago
You can interact with the 13B demo here: https://huggingface.co/spaces/chansung/Alpaca-LoRA-Serve
You can see examples of the training set here: https://github.com/tloen/alpaca-lora/blob/main/alpaca_data_cleaned.json And here: https://github.com/gururise/AlpacaDataCleaned
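For reference, each entry in alpaca_data_cleaned.json follows the Alpaca instruction format: an "instruction" field, an optional "input" field (empty string when no context is needed), and an "output" field. A minimal sketch of what one record looks like and a simple validity check (the sample text itself is illustrative, not from the dataset):

```python
import json

# Illustrative Alpaca-style training record; real entries in
# alpaca_data_cleaned.json use the same three string fields.
sample_record = {
    "instruction": "Summarize the following text.",
    "input": "LoRA fine-tunes large models by training small low-rank matrices.",
    "output": "LoRA adapts large models cheaply via low-rank weight updates.",
}

def is_valid_record(record: dict) -> bool:
    """Check a record has exactly the three Alpaca fields, all strings."""
    expected = {"instruction", "input", "output"}
    return set(record) == expected and all(
        isinstance(record[k], str) for k in expected
    )

print(is_valid_record(sample_record))  # True
print(json.dumps(sample_record, indent=2))
```

Custom datasets in this same JSON shape can be dropped in as a replacement for the stock training file.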
Thank you very much! If I want to train the 13B model, what are the minimum GPU requirements?
Does the current model support multi-turn dialogue? If I train such a model on my own data, are there any special requirements for the dataset, such as multi-turn dialogue training samples? Thanks!