Following up on the issue: https://github.com/young-geng/EasyLM/issues/7
I cleaned up a script proposed by @Lisennlp and used it to successfully convert the XGen 7B model, which is llama-compatible (https://huggingface.co/Salesforce/xgen-7b-8k-base), from HF to EasyLM, so that it can be fine-tuned with EasyLM. The converted model works reasonably well in EasyLM: math-arXiv perplexity improves from 2.81 at 2K context length to 2.46 at 8K.
Credits to @Lisennlp for creating the script.
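For anyone curious what such a conversion involves: the core of it is remapping HF LLaMA parameter names onto a nested flax-style parameter tree and transposing the 2-D weights (PyTorch `Linear` stores `[out, in]`, while flax `Dense` expects `[in, out]`). A minimal sketch below, where the EasyLM-side names (`transformer`, `h`, `attention`, `wq`, ...) are illustrative assumptions, not necessarily the exact tree EasyLM defines; the actual script should use the names from EasyLM's LLaMA module:

```python
import numpy as np

# Leaf-name mapping: HF LLaMA module names -> flax-style names.
# The right-hand sides are illustrative guesses at EasyLM's layout.
LEAF_MAP = {
    "self_attn.q_proj.weight": ("attention", "wq", "kernel"),
    "self_attn.k_proj.weight": ("attention", "wk", "kernel"),
    "self_attn.v_proj.weight": ("attention", "wv", "kernel"),
    "self_attn.o_proj.weight": ("attention", "wo", "kernel"),
    "mlp.gate_proj.weight": ("feed_forward", "w1", "kernel"),
    "mlp.down_proj.weight": ("feed_forward", "w2", "kernel"),
    "mlp.up_proj.weight": ("feed_forward", "w3", "kernel"),
    "input_layernorm.weight": ("attention_norm", "kernel"),
    "post_attention_layernorm.weight": ("ffn_norm", "kernel"),
}

def hf_key_to_flax_path(hf_key):
    """Map one HF parameter name to a nested flax-style path (assumed names)."""
    if hf_key == "model.embed_tokens.weight":
        return ("transformer", "wte", "embedding")
    if hf_key == "model.norm.weight":
        return ("transformer", "ln_f", "kernel")
    if hf_key == "lm_head.weight":
        return ("lm_head", "kernel")
    prefix = "model.layers."
    if hf_key.startswith(prefix):
        layer, _, leaf = hf_key[len(prefix):].partition(".")
        return ("transformer", "h", layer) + LEAF_MAP[leaf]
    raise KeyError(hf_key)

def convert_state_dict(hf_state):
    """Build a nested flax param dict from an HF state dict.

    2-D projection weights are transposed because torch Linear stores
    [out_features, in_features] while flax Dense expects the opposite.
    """
    params = {}
    for key, tensor in hf_state.items():
        path = hf_key_to_flax_path(key)
        arr = np.asarray(tensor)
        if path[-1] == "kernel" and arr.ndim == 2:
            arr = arr.T
        node = params
        for name in path[:-1]:
            node = node.setdefault(name, {})
        node[path[-1]] = arr
    return params
```

The resulting nested dict can then be serialized in whatever checkpoint format EasyLM's checkpointer reads; the missing transpose is the classic silent bug in these conversions, so it is worth spot-checking logits between the two frameworks after converting.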