jzhang38 / EasyContext

Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware.
Apache License 2.0
529 stars 33 forks

How to run inference with the model? #27

Open laoda513 opened 1 month ago

laoda513 commented 1 month ago

Should I also integrate the same libraries used for training, or can I just load the model as usual?
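
A minimal sketch of the "load as usual" path, assuming the EasyContext-trained weights are saved as a standard Hugging Face checkpoint (the checkpoint name, dtype, and attention settings below are illustrative assumptions, not confirmed by the maintainers):

```python
def load_for_inference(checkpoint: str):
    """Load a long-context checkpoint like any other causal LM.

    Imports are kept inside the function so this sketch can be read
    (and the helper imported) without transformers/torch installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForCausalLM.from_pretrained(
        checkpoint,
        torch_dtype=torch.bfloat16,  # halve memory vs. fp32
        device_map="auto",           # shard across available GPUs
        # FlashAttention-2 keeps memory manageable on very long inputs;
        # assumes the package flash-attn is installed.
        attn_implementation="flash_attention_2",
    )
    model.eval()
    return tokenizer, model


if __name__ == "__main__":
    # Hypothetical checkpoint path -- substitute your trained model.
    tokenizer, model = load_for_inference("path/to/easycontext-checkpoint")
    inputs = tokenizer("A very long document ...", return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note the caveat: a single GPU can hold the 7B weights, but the KV cache for a genuinely long prompt (hundreds of thousands of tokens) may still exceed its memory, so multi-GPU sharding or a dedicated serving stack may be needed in practice.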