karlnapf opened 6 years ago
I would say a good place to start is to check out the benchmarking system, run it on a small dataset, and then produce an SQL query on the results that gives the desired information. After that we can figure out the right way to put this together into something that can be merged.
If somebody would like to work on the latest results, we can also provide an SQL dump.
(18:34:46) zoq: Agreed, a simple preprocessing script sounds like a good idea; at the end, it's just a simple SQL query.
(18:34:58) zoq: We could provide some simple scripts apart from the HTML/JS interface for the most common tasks.
(18:35:12) zoq: I guess you could also write a simple Jupyter notebook? I think you could provide some input on that?
Hi,
I've figured out a way to get the results through the mysql_wrapper.php.
curl -X POST \
http://www.mlpack.org/php/mysql_wrapper.php \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Postman-Token: 175a469b-31bd-4921-bd23-170d8b9e171a' \
-d 'request=<QUERY>'
And <QUERY> is:
SELECT
*
FROM (
SELECT
libraries.name as lib,
methods.name as name,
datasets.name as dataset,
results.time as time,
results.var as var,
libraries.id,
datasets.id as did,
libraries.id as lid,
results.build_id as bid,
datasets.instances as di,
datasets.attributes as da,
datasets.size as ds
FROM results, datasets, methods, libraries
WHERE
results.dataset_id = datasets.id AND
results.method_id = methods.id AND
methods.parameters = '' AND
libraries.id = results.libary_id AND
libraries.name = 'shogun' AND
results.time <= 0 # this can be removed if we want to get all results.
ORDER BY bid DESC
) tmp;
Returned JSON:
[
{
"lib": "shogun",
"name": "LinearRegression",
"dataset": "diabetes",
"time": "0",
"var": "0",
"id": "6",
"did": "8",
"lid": "6",
"bid": "33",
"di": "442",
"da": "10",
"ds": "0"
},
{
"lib": "shogun",
"name": "LinearRegression",
"dataset": "cosExp",
"time": "-2",
"var": "0",
"id": "6",
"did": "9",
"lid": "6",
"bid": "33",
"di": "200",
"da": "800",
"ds": "1"
},
...
]
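For anyone preferring Python over curl (e.g. in the Jupyter notebook zoq suggested), the returned JSON is straightforward to post-process. A minimal sketch, using the two sample rows above (the field set is trimmed for brevity):

```python
import json

# The first two rows of the wrapper's response, reduced to a few fields.
sample = """[
  {"lib": "shogun", "name": "LinearRegression", "dataset": "diabetes",
   "time": "0", "var": "0"},
  {"lib": "shogun", "name": "LinearRegression", "dataset": "cosExp",
   "time": "-2", "var": "0"}
]"""

rows = json.loads(sample)
# time <= 0 marks runs that failed or were skipped, matching the
# WHERE clause in the query above.
failing = [(r["name"], r["dataset"]) for r in rows if float(r["time"]) <= 0]
print(failing)
```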
Nice solution, thanks for the input.
hi @zoq,
I'm a bit concerned about the API though. With this interface, one might directly send malicious queries to the DB.
I personally think we should find a better way to extract the results, as well as make the API more restrictive.
One way might be to generate benchmark results offline and write the results as JSON files. The frontend will just read those JSON files.
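As a sketch of that offline approach: a small exporter could run the query against a local copy of the results database and write one JSON file per library, so the frontend only ever reads static files. This is hypothetical code; sqlite3 stands in for the real MySQL database, and the table and column names (including the `libary_id` spelling) follow the query shown above:

```python
import json
import sqlite3

def export_results(conn, library, out_path):
    """Dump one library's benchmark results to a static JSON file.

    Hypothetical exporter: `conn` is an open DB connection; the schema
    mirrors the results/datasets/methods/libraries tables used above.
    """
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        """SELECT libraries.name AS lib, methods.name AS name,
                  datasets.name AS dataset, results.time AS time
           FROM results, datasets, methods, libraries
           WHERE results.dataset_id = datasets.id
             AND results.method_id = methods.id
             AND libraries.id = results.libary_id
             AND libraries.name = ?""",
        (library,),
    ).fetchall()
    with open(out_path, "w") as f:
        json.dump([dict(r) for r in rows], f, indent=2)
    return len(rows)
```

The frontend would then fetch e.g. `shogun.json` instead of POSTing raw SQL, which removes the injection surface entirely.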
Actually, I was surprised you could send a query; only a specific IP should be able to do that. Do you think that could be sufficient?
I was surprised too!
Can you elaborate a bit on your approach? If I understood correctly, you would allow only certain IPs to send the POST request to the server. If that's the case, the benchmark page will be functional only for those IPs, right?
Right, only the build/webserver would be able to send the POST. I think this is already the case, but since a user is able to run the PHP script, the IP appears correct, so I guess we could adapt the script.
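The restriction could look roughly like this (in Python for illustration only; the real wrapper is a PHP script, and the allowlisted address below is a placeholder):

```python
# Hypothetical allowlist check: only clients whose address is on an
# explicit list (e.g. the build/webserver) may POST queries; everyone
# else gets an HTTP 403. The address here is a placeholder.
ALLOWED_IPS = {"127.0.0.1"}

def handle_request(remote_addr: str, query: str):
    """Return (status, body) for a POSTed benchmark query."""
    if remote_addr not in ALLOWED_IPS:
        return 403, "Forbidden"
    # ... here the wrapper would run `query` against the DB ...
    return 200, "[]"
```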
Hi @zoq & @karlnapf,
Given the command I provided previously and current coverage in https://github.com/shogun-toolbox/shogun/issues/4046#issuecomment-461455849, I think we basically have everything we need for this issue.
What else should we do for this issue?
But some shogun algos are uncovered, so how can we know how they perform?
It would be interesting to see in which cases shogun is leading or behind. The second case could be a good starting point for further analysis.
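One hypothetical way to surface those cases from the wrapper's JSON: pair shogun's runtime against a reference library on each (method, dataset) combination and flag where shogun is slower. Field names follow the JSON shown earlier; the choice of reference library is an assumption:

```python
def compare(rows, reference="mlpack"):
    """Flag (method, dataset) pairs where shogun is behind `reference`.

    `rows` is a list of dicts in the shape returned by the wrapper;
    `reference` is a hypothetical comparison library.
    """
    by_key = {}
    for r in rows:
        by_key.setdefault((r["name"], r["dataset"]), {})[r["lib"]] = float(r["time"])
    report = []
    for (method, dataset), libs in sorted(by_key.items()):
        if "shogun" in libs and reference in libs:
            report.append({
                "method": method,
                "dataset": dataset,
                "shogun": libs["shogun"],
                reference: libs[reference],
                "shogun_behind": libs["shogun"] > libs[reference],
            })
    return report
```

The `shogun_behind` entries would then be the starting points for further analysis.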
I've also created an issue at mlpack's benchmarks (https://github.com/mlpack/benchmarks/issues/133). It seems Shogun has a DTC algorithm, but the benchmark report doesn't have the result.
@zoq do you have any idea why that happens?
This task is to find a systematic way to figure out the cases in which a library in the automated benchmarking system performs badly.
Contact @zoq and @rcurtin, who have ideas on how to do that.