gywlssww opened 10 months ago
Yes, we use the same tokenizer as llama-2. We'd love to share the data, but due to the sheer amount of it, I am not sure what the best way to serve it would be. Let me know if you have any ideas!
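For anyone who wants to confirm this locally, here is a minimal sketch (not from this repo) of how one could check that the provided tokenizer.model matches Llama-2's, either byte-for-byte or by comparing tokenization output. The file paths are placeholders, and it assumes the `sentencepiece` package is installed.

```python
# Sketch: verify the provided tokenizer.model against Llama-2's tokenizer.
# Paths below are placeholders -- point them at your local copies.
import hashlib

from sentencepiece import SentencePieceProcessor


def sha256(path: str) -> str:
    """Return the SHA-256 hex digest of a file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


provided = "path/to/this_repo/tokenizer.model"  # placeholder
llama2 = "path/to/llama-2/tokenizer.model"      # placeholder

# Byte-level check: identical files imply identical tokenizers.
print("identical files:", sha256(provided) == sha256(llama2))

# Behavioral check: compare token ids on a sample sentence.
sp_a = SentencePieceProcessor(model_file=provided)
sp_b = SentencePieceProcessor(model_file=llama2)
text = "The quick brown fox jumps over the lazy dog."
print("same token ids:", sp_a.encode(text) == sp_b.encode(text))
```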
Does the size of the dataset exceed the limits of Google Drive, OneDrive, or Dropbox?
+1! It would be very helpful to have the finetuning/continue-pretraining dataset as well, to be able to reproduce the paper's results.
Could you provide the tokenized continue-pretraining dataset for reproduction, like the pruning dataset? Also, is the tokenizer.model you provided exactly the same tokenizer as Llama-2's?