Hello everyone! I recently read Vineet Suryan's blog post on benchmarking machine learning frameworks, and the idea of evaluating ML models across diverse hardware caught my attention. It resonates with my previous experience running vision-based deep learning models such as YOLO on the Avnet Ultra96-V2 using Vitis AI. I also have a few ideas of my own, such as adding problem-specific evaluation metrics to the platform.
I'm eager to contribute to this project as an open-source collaborator. While exploring, I came across https://github.com/makaveli10/MLBench/tree/documentation#mlbench, but unfortunately I couldn't find any open issues to kickstart my involvement. Could someone point me to a good starting point?