Closed: fatihsen20 closed this pull request 1 year ago
Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB.
PS: Don't worry about the linter above; we can address that (happy to take care of it) at the very end.
Patch coverage: 90.59% and project coverage change: +0.13% :tada:
Comparison is base (11b61b9) 77.33% compared to head (21e038d) 77.46%.
Thanks a lot for the update! I will do my best to review it this weekend!
I always overlook the isort rules. I am sorry!
No worries about that! The style checks are really not a big deal, we can address those at the very end upon merging.
Hello, will my pull request be merged? I am very excited about this! @rasbt
This is a fantastic PR! And sorry, will merge soon! Haven't had a chance to go over the docs yet! I have a question here: you mentioned "on a large dataset." but did you mean "on a small dataset.", i.e., the dataset from the example above? No worries about fixing it. I did some other tweaks in the docs and will update soon (pls don't modify the Nb in the meantime due to merge conflicts, Jupyter NBs are still a bit tricky on GitHub).
Sorry, I probably missed that while preparing the document. Yes, it should say small dataset, as in the example there. Thank you so much for your hard work!
Oh, I thought it was based on the small example at the top. So you ran the benchmark on a small hands-on dataset, if I understand correctly? I think we can leave things as is; or, if the dataset is not too large, we could add it to the docs repo. What do you think?
Yes, it was based on the small example above. Since we are testing with a small dataset in the docs, it can stay small. I don't think it will be a problem if we add it this way.
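(For readers following the thread: a minimal sketch of the kind of small-dataset timing comparison being discussed, assuming the mlxtend.frequent_patterns API; the transactions below are illustrative, not the exact dataset from the notebook, and the timings are machine-dependent.)

```python
from timeit import timeit

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, fpgrowth, hmine

# Illustrative small transactions (the docs notebook uses its own small example)
transactions = [
    ["apple", "beer", "rice", "chicken"],
    ["apple", "beer", "rice"],
    ["apple", "beer"],
    ["apple", "pear"],
    ["milk", "beer", "rice", "chicken"],
    ["milk", "beer", "rice"],
    ["milk", "beer"],
    ["milk", "pear"],
]

# One-hot encode the transactions into a boolean DataFrame
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

# Time each miner on the same small DataFrame; absolute numbers are purely illustrative
for fn in (apriori, fpgrowth, hmine):
    seconds = timeit(lambda fn=fn: fn(df, min_support=0.6, use_colnames=True), number=100)
    print(f"{fn.__name__:10s} {seconds / 100:.6f} s per call")
```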
Ok perfect, then I'd say it's fine as is because the example is already in the notebook.
Should be good to merge then, correct?
Okay, please do!
Merged it, @fatihsen20. Thanks again for this awesome PR! I am hoping to make a new release over the upcoming weekend.
Code of Conduct
New feature.
Description
To enrich the library and to allow comparing the speed and memory costs of the frequent-pattern mining algorithms, I added the hmine (H-Mine) algorithm to the library.
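A minimal usage sketch of the new function (the call is assumed to mirror the existing mlxtend.frequent_patterns interface of apriori/fpgrowth; the exact signature should be checked against the merged code):

```python
import pandas as pd

from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import hmine

# A small, illustrative transaction dataset
transactions = [
    ["milk", "bread", "eggs"],
    ["bread", "butter"],
    ["milk", "bread", "butter"],
    ["bread", "eggs"],
]

# One-hot encode the transactions into a boolean DataFrame
te = TransactionEncoder()
df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

# Mine frequent itemsets with H-Mine
frequent_itemsets = hmine(df, min_support=0.5, use_colnames=True)
print(frequent_itemsets)
```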
Related issues or pull requests
None
Pull Request Checklist
- Added a note about the modification or contribution to the ./docs/sources/CHANGELOG.md file (if applicable)
- Added appropriate unit test functions in the ./mlxtend/*/tests directories (if applicable)
- Modified documentation in the corresponding Jupyter Notebook under mlxtend/docs/sources/ (if applicable)
- Ran PYTHONPATH='.' pytest ./mlxtend -sv and made sure that all unit tests pass (for small modifications, it might be sufficient to only run the specific test file, e.g., PYTHONPATH='.' pytest ./mlxtend/classifier/tests/test_stacking_cv_classifier.py -sv)
- Checked for style issues by running flake8 ./mlxtend