Dear lonePatient,
I hope this message finds you well.
I would like to express my gratitude for your valuable work. Your comprehensive survey has been a great contribution to the Chinese NLP community.
I am writing to request the inclusion of our open-source models, MAP-Neo and CT-LLM, in this GitHub repository.
Here is a brief introduction to our models:
MAP-Neo: A fully open-source model whose pre-training and post-training data, training code, and model weights are all publicly released. More details are available on our project website, MAP-Neo.

CT-LLM (Chinese Tiny LLM): The first LLM pre-trained primarily on Chinese corpora, with the Chinese pre-training dataset, data processing pipeline, intermediate checkpoints, and pre-training code all open-sourced. More information can be found on our website, Chinese Tiny LLM.
Thank you for considering our request. We would be happy to provide any additional information.
Best regards,
M-A-P Team