LittleLittleCloud / Torchsharp-phi

Torchsharp port of phi-series model

Compare the GPU memory requirements between TorchSharp-phi and Microsoft.ML.OnnxRuntimeGenAI #13

Open GeorgeS2019 opened 1 week ago

LittleLittleCloud commented 1 week ago

@GeorgeS2019 thanks for the feedback. We are working on migrating this project to Microsoft.ML.GenAI.Phi, and afterwards we will add a GPU memory comparison between this package and the phi model in the onnxruntime package.

GeorgeS2019 commented 1 week ago

I have problems using the GPU with onnxruntime, onnxruntime-training, and GenAI.

The requirements are not clear and the examples provided are not up to date.

All examples focus only on CPU, which makes GPU usage unclear.
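For reference, enabling the GPU in the C# onnxruntime API usually comes down to attaching the CUDA execution provider to the session options before creating the session. A minimal sketch follows; the model path is a placeholder, and it assumes the Microsoft.ML.OnnxRuntime.Gpu NuGet package plus a matching CUDA/cuDNN install on the machine:

```csharp
using Microsoft.ML.OnnxRuntime;

// Requires the Microsoft.ML.OnnxRuntime.Gpu package; the CPU-only
// package does not ship the CUDA execution provider.
using var options = SessionOptions.MakeSessionOptionWithCudaProvider(deviceId: 0);

// "model.onnx" is a placeholder path, not a file from this repo.
using var session = new InferenceSession("model.onnx", options);
```

If the provider fails to load at runtime, it is typically a mismatch between the installed CUDA/cuDNN version and the one the GPU package was built against, rather than a problem with the model itself.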

LittleLittleCloud commented 1 week ago

For GenAI, are you referring to this sample? If that's the case, would you mind sharing the error log you have? I'm happy to help out.