For the Benchmark Leaderboard, we aim to create a web page that displays the performance of each model based on the metrics provided in the test.pred.eval.mean.csv file. The task involves the following steps:
- Crawl the Output directory to locate the results of each model.

I would greatly appreciate your ideas and suggestions, @hosseinfani, regarding this task. Additionally, as suggested by @MarcoKurepa, we could consider creating a separate repository specifically for this task, making it reusable for other projects as well.
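For the crawling step, here is a minimal sketch, assuming one folder per model somewhere under the output tree, each holding a test.pred.eval.mean.csv; the collect_results function, the folder-name-equals-model-name convention, and the pandas dependency are my assumptions, not the actual implementation:

```python
import os
import pandas as pd

def collect_results(output_dir):
    # Map model name -> metrics DataFrame, assuming each model's folder
    # contains a test.pred.eval.mean.csv produced by the evaluation run.
    results = {}
    for root, _, files in os.walk(output_dir):
        if 'test.pred.eval.mean.csv' in files:
            model = os.path.basename(root)  # assumption: folder name is the model name
            results[model] = pd.read_csv(os.path.join(root, 'test.pred.eval.mean.csv'))
    return results

if __name__ == '__main__':
    for model, df in collect_results('output').items():
        print(model)
        print(df.head())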
I'm done with the preliminary design; right now I am using manually inputted dummy values (no crawler yet, that's next). My current TODO:
If there's anything you'd like me to add or change now, please do tell :)
Alright, I've done everything except the table highlights (I'll probably just bold things to highlight them). It may look a bit funky with only 2 models, but when we put all 11 in it'll look fine. To prepare it, you first need to convert all the CSV files to JSON files (I just used a website; I can make a script for this too) and then run 'extract_model_data.json'. From there you should just be able to open index.html in Chrome.
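As a placeholder for the conversion script mentioned above, here is a minimal CSV-to-JSON sketch; the command-line interface and the list-of-rows JSON layout are my assumptions and may not match what the leaderboard page actually expects:

```python
import csv
import json
import sys

def csv_to_json(csv_path, json_path):
    # Read the CSV into a list of dicts, one per row, keyed by the header row,
    # then dump it as a JSON array the front end can fetch.
    with open(csv_path, newline='') as f:
        rows = list(csv.DictReader(f))
    with open(json_path, 'w') as f:
        json.dump(rows, f, indent=2)

if __name__ == '__main__':
    # e.g., python csv_to_json.py test.pred.eval.mean.csv test.pred.eval.mean.json
    csv_to_json(sys.argv[1], sys.argv[2])
```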
@rezaBarzgar You can review it now and suggest changes. If it is to your satisfaction, the only next steps are to get the real data and the real logo in.
@MarcoKurepa I cloned the repo on my PC and executed it. However, I encountered a few issues that I believe are bugs in the code.
Specifically, there seems to be an error in reading the data. Furthermore, I noticed that there is a difference in the background color below the table. Currently, it appears to be white, whereas it should match the background color of the table and graph for a consistent visual experience.
Hello Reza, this is not a bug in the code but rather a gap in my instructions on how to run it. Since local directory files are needed, index.html cannot simply be opened in Chrome via the 'file://' protocol. You'll need to host a local HTTP server using Node.
Here is a Stack Overflow thread detailing how to do so; I just tested it and it worked for me :)
Apologies for the inconvenience!
Note: The whitespace at the bottom of the screen is due to the failure to load the data; without the data, a graph can't be made.
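For anyone who prefers not to install Node, Python's built-in http.server does the same job; this is an alternative I'm naming plainly, not the setup from the Stack Overflow thread. Run it from the repository root and open http://localhost:8000/index.html:

```python
# Serve the current directory over HTTP so index.html can fetch local files.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(('localhost', 8000), SimpleHTTPRequestHandler).serve_forever()
```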
@MarcoKurepa Thank you for your additional explanation. Sorry about that. :)
@MarcoKurepa, would it be possible for you to add a README file to the repository? It would contain detailed instructions that anyone can follow to run the leaderboard page. Having clear and concise instructions would greatly facilitate the usage of the codebase and ensure a smooth experience for all users. Thank you!
Will do!
@rezaBarzgar I have updated the repository as specified.
https://github.com/MarcoKurepa/OpeNTF-Benchmark-Leaderboard/tree/main
@MarcoKurepa great! Thanks.
@MarcoKurepa Thanks for the update. We need the following changes:
Changelog:
TODO:
I will start to work on the TODO immediately; however, I'd appreciate it if you'd go over my changes and make sure they're agreeable, or tell me if I'm missing something.
@rezaBarzgar @hosseinfani
@hosseinfani Can you explain in a bit more detail how you'd like to restructure the output folder for the dynamic crawler?
@MarcoKurepa I talked with @rezaBarzgar. You need to make a minor change in the OpeNTF driver code to add a subfolder, named after the domain, to the output folder that the user gives. That is, change:
https://github.com/fani-lab/OpeNTF/blob/9a3c933c1d281205a5d0640046dea4d0d0fb6a7d/src/main.py#L159
to
output_path = f"{output}/{d_name}/{os.path.split(datapath)[-1]}..."
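For illustration, a hedged sketch of that change using os.path.join; the sample values for output, d_name, and datapath are made up, not what main.py actually receives:

```python
import os

# Hypothetical values; in main.py these come from the user's arguments.
output = '../output'
d_name = 'dblp'  # domain name used for the new subfolder
datapath = '../data/raw/dblp/dblp.v12.json'

# Portable version of the suggested line: output/<domain>/<dataset file>...
output_path = os.path.join(output, d_name, os.path.split(datapath)[-1])
print(output_path)  # ../output/dblp/dblp.v12.json
```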
@rezaBarzgar Alright, I made all the necessary changes; you can review them on my fork! I am updating the README.md right now to bring it up to date.
@rezaBarzgar @hosseinfani Should we schedule another review of the benchmark leaderboard to ensure that it performs as intended? I'd like to wrap up this issue before school begins for me (next Tuesday), as I won't be available as often after that.
@MarcoKurepa, I'll check it again. Hopefully, if it works fine, I'll merge it with the main branch alongside the dataset retrieval feature.
@MarcoKurepa I reviewed the leaderboard. It works fine. I appreciate your help. @hosseinfani I think we can close this issue. We'll merge Marco's fork to the main branch when #191 is resolved.
@MarcoKurepa thank you for developing the leaderboard. @rezaBarzgar thank you also.
Building something like this: https://openbenchmark.github.io/BarsCTR/leaderboard/avazu_x1.html