Open arjunsuresh opened 1 year ago
@mrmhodak I have added some of the points which I believe are useful for new inference submitters. Can you please review?
@arjunsuresh: These are great points! For now, I am adding to the agenda tomorrow and we should definitely work on these.
Reference implementations are not practically usable. While it is not practical to support all hardware, ideally we should have an object-oriented device abstraction, so that a new submitter can extend the device class to add a new device.
We at KRAI are busy adding new backends to our KILT codebase, which we released under a permissive open-source license after the v3.0 round.
The reference implementation should support scalability (number of CPU cores / number of GPUs).
KILT has been used to produce some of the fastest and most energy efficient results in the history of MLPerf (with up to 18 Qualcomm Cloud AI 100 accelerators).
I believe KILT would satisfy at least these points and more with community contributions. If there is sufficient interest, we would consider making it an official MLCommons project, like Collective Knowledge.
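To illustrate the extensible device idea above, here is a minimal hypothetical sketch in Python. The class names, methods, and placeholder backend are all illustrative assumptions, not code from any existing MLPerf or KILT codebase:

```python
# Hypothetical sketch of an extensible device abstraction: a new submitter
# would subclass Device to add support for their hardware. Names and
# signatures are assumptions for illustration only.
from abc import ABC, abstractmethod


class Device(ABC):
    """Base class a submitter extends to add a new hardware backend."""

    @abstractmethod
    def load_model(self, model_path: str) -> None:
        """Load a model onto the device."""

    @abstractmethod
    def run(self, batch):
        """Run inference on a batch and return one prediction per sample."""


class CPUDevice(Device):
    """Minimal CPU backend standing in for a default reference target."""

    def load_model(self, model_path: str) -> None:
        # Placeholder: a real backend would load weights here.
        self.model_path = model_path

    def run(self, batch):
        # Placeholder predictions, one per input sample.
        return [None for _ in batch]


device = CPUDevice()
device.load_model("model.onnx")
print(len(device.run([1, 2, 3])))  # → 3
```

A submitter targeting a new accelerator would only implement `load_model` and `run` for their hardware, leaving the harness and benchmark logic untouched.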
Thank you @psyhtest. I think KILT will be useful, particularly if it supports more hardware backends beyond Qualcomm. In fact, we plan to integrate KILT with our Collective Knowledge workflows as an open MLPerf inference v3.1 challenge. Please feel free to join this effort!
By the way, @psyhtest, if it's of interest, we already have a project at MLCommons related to KILT (Thomas Zhu, a student from Oxford University, worked with our MLCommons Task Force on Automation and Reproducibility to provide a first implementation):
It will be interesting to compare it with KILT and extend if needed.
Also, if I recall correctly, a few companies mentioned during the inference v3.0 press briefing that they will release their own open-source, universal C++ implementations of MLPerf benchmarks for inference v3.1.
Our Task Force will be happy to help consolidate these efforts under existing MLCommons projects and integrate them with our MLCommons CK/CM workflow automation. Looking forward to collaboration!
I think KILT will be useful, particularly if it supports more hardware backends beyond Qualcomm.
We are planning to release more backends after the v3.1 round. Some necessary code refactoring is underway. I don't think it will be particularly productive to do anything until then, to be honest.
Sure. Sounds good!
The Submission Guidelines are added now.
Please feel free to add any more points which can help a new submitter.