Tharun24 / MACH

Extreme Classification in Log Memory via Count-Min Sketch

add preprocessing step to run.sh #1

Open markhuds opened 4 years ago

markhuds commented 4 years ago

Hey, I was just trying to reproduce your results for amazon_670k. I followed the steps in the run.sh file, but when I ran each process, I realized they were not using the GPU because the generators were not producing any data. This was because I had not run the preprocessing script on the training data. Running preproc.py solved the issue. I'm not sure if I missed something obvious, but maybe it's worth including in the run.sh script?
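
For reference, a minimal sketch of what the proposed change might look like (the exact preproc.py invocation and its arguments are assumptions; the real script may need dataset-specific flags):

```bash
#!/bin/bash
# Sketch of the proposed run.sh change (assumption: preproc.py sits at the
# repo root and can be run without extra arguments -- adjust to match the
# actual script).

# Run the preprocessing step first so the data generators have input
# and the training processes actually use the GPU.
python preproc.py

# ... existing training commands from run.sh follow unchanged ...
```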

Tharun24 commented 4 years ago

Thank you Mark for pointing that out. Sure, I'm in the process of updating the package with a more streamlined version. I'll write up a detailed README.md for others to use.

Regards, Tharun Medini
