Open lephanthuymai opened 3 years ago
Assigning @zmerpez & @YikiSu as reviewers.
The package includes all the following forms of documentation:
`setup.py` file or elsewhere.
Readme requirements
The package meets the readme requirements below:
The README should include, from top to bottom:
Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole. Package structure should follow general community best-practices. In general please consider:
Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.
The package contains a `paper.md` matching JOSS's requirements with:
Estimated hours spent reviewing: 3 hours
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
spacymoji 2.0.0 requires spacy<3.0.0,>=2.1.3, but you have spacy 3.0.5 which is incompatible.
shap 0.37.0 requires slicer==0.0.3, but you have slicer 0.0.7 which is incompatible.
Successfully installed catalogue-2.0.1 datascience-eda-0.1.6 numpy-1.19.5 pathy-0.4.0 pydantic-1.7.3 seaborn-0.11.1 sklearn-0.0 smart-open-3.0.0 spacy-3.0.5 spacy-legacy-3.0.1 srsly-2.4.0 textblob-0.15.3 thinc-8.0.2 typer-0.3.2 wasabi-0.8.2 wordcloud-1.8.1 yellowbrick-1.3.post1
This should be solvable by adjusting the pinned versions of these dependencies in the toml file.
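If you are using Poetry (I'm assuming so from the mention of the toml file), the pins in `pyproject.toml` might be adjusted roughly like the sketch below; the exact bounds that satisfy `spacymoji` and `shap` would need to be double-checked against their release notes.

```toml
# Hypothetical excerpt of pyproject.toml -- the section name assumes a Poetry setup.
[tool.poetry.dependencies]
python = "^3.8"
spacy = ">=2.1.3,<3.0.0"   # spacymoji 2.0.0 requires spacy<3.0.0
slicer = "0.0.3"           # shap 0.37.0 pins slicer==0.0.3
```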
Following the `usage` section on your README page, I was not able to load the data and run the function calls. I would highly recommend including a link to the test file `/data/menu.csv` in your README. It would also be great if the following lines were added to your `usage` section:
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.compose import make_column_transformer
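For example, a minimal sketch of how the expanded `usage` example could look, assuming `data/menu.csv` is the intended demo file (no specific column names are assumed):

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.compose import make_column_transformer

# Load the demo dataset from the repository (path relative to the repo root).
menu_df = pd.read_csv("data/menu.csv")

# Impute and scale the numeric columns; select_dtypes is used so this sketch
# does not have to assume specific column names in menu.csv.
numeric_cols = menu_df.select_dtypes(include="number").columns.tolist()
preprocessor = make_column_transformer(
    (make_pipeline(SimpleImputer(), StandardScaler()), numeric_cols),
)
transformed = preprocessor.fit_transform(menu_df)
```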
I was not able to properly import your package as suggested in the `usage` section, but it works after I tried `import datascience_eda.datascience_eda as eda`. I googled and found out that it is probably because the functions are not imported in `__init__.py`. I guess my group would have the same problem as well. But after updating this import line, I could access the functions in your package.
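For what it's worth, a minimal sketch of what the re-exports in `__init__.py` could look like; only `explore_categorical_columns` is named in this review, so the other entries are left as a placeholder comment:

```python
# datascience_eda/__init__.py
# Re-export the public functions so that `import datascience_eda as eda`
# works directly instead of `import datascience_eda.datascience_eda as eda`.
from .datascience_eda import (
    explore_categorical_columns,
    # ...the package's remaining public functions would be listed here too
)
```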
Since some of your functions generate a lot of plots in one call, when I tried to run them in my JupyterLab, it initially refused to show the plots. But it works after I put this in:
import matplotlib as mpl
mpl.rcParams.update({'figure.max_open_warning': 0})
It would be nice if you could include this in your `usage` section.
Overall, it is a very cool package! Very well done, group datascience_eda. You made a great package that does thorough EDA on datasets and provides insightful information. It will come in very handy for some of my projects! Good job!
The package includes all the following forms of documentation:
`setup.py` file or elsewhere.
Readme requirements
The package meets the readme requirements below:
The README should include, from top to bottom:
Reviewers are encouraged to submit suggestions (or pull requests) that will improve the usability of the package as a whole. Package structure should follow general community best-practices. In general please consider:
Note: Be sure to check this carefully, as JOSS's submission requirements and scope differ from pyOpenSci's in terms of what types of packages are accepted.
The package contains a `paper.md` matching JOSS's requirements with:
Estimated hours spent reviewing: 2 hours
Install
The line provided in the README worked well for me. I just needed to import as Yiki suggested above, with `import datascience_eda.datascience_eda as eda`.
Usage
I think your main functions do not need imputation, and hence the `sklearn` imports at the start. I tried `from palmerpenguins import load_penguins` and `penguin_df = load_penguins()`, and all functions worked fine on this dataset without imputation. That might reduce the amount of setup code needed to use your package.
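For reference, the check I ran looked roughly like this; the exact call signature of `explore_categorical_columns` is an assumption on my part:

```python
from palmerpenguins import load_penguins
import datascience_eda.datascience_eda as eda

penguin_df = load_penguins()

# Only explore_categorical_columns is named explicitly in this review;
# the call below assumes it accepts a dataframe directly, which is an
# assumption rather than the documented signature.
eda.explore_categorical_columns(penguin_df)
```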
Note
In the usage example, after transforming, the resulting `df` would only have the numerical columns, so using the other 3 functions on this dataframe would give an error. You can either add `"passthrough"` (as the `remainder` of the column transformer), or use the original data frame as the parameter in your functions.
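A minimal sketch of the `remainder="passthrough"` option, using the same generic column selection as the usage example above (no menu.csv column names are assumed):

```python
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.compose import make_column_transformer

df = pd.read_csv("data/menu.csv")  # same demo file as in the usage example
numeric_cols = df.select_dtypes(include="number").columns.tolist()

# remainder="passthrough" keeps the non-numeric columns in the transformer
# output instead of dropping them, so the categorical/text functions can
# still be applied afterwards (the alternative is simply to pass the
# original dataframe to those functions).
preprocessor = make_column_transformer(
    (make_pipeline(SimpleImputer(), StandardScaler()), numeric_cols),
    remainder="passthrough",
)
transformed = preprocessor.fit_transform(df)
```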
Overall, I liked the visualizations. I might prefer the output of `explore_categorical_columns` to be something a bit easier to read, like a data frame. I have seen quite a few functions while expecting only 4, which is a really good amount of work =) I wish the team had a better balance in the workload, given you have created such a big package.
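To illustrate what I mean by a data frame output, a summary along these lines would be easier to scan; this is only a sketch and is not tied to how the function is currently implemented:

```python
import pandas as pd

def categorical_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Return counts and percentages for every categorical column
    as one tidy dataframe. Illustrative only."""
    frames = []
    for col in df.select_dtypes(include=["object", "category"]).columns:
        counts = df[col].value_counts(dropna=False)
        frames.append(
            pd.DataFrame(
                {
                    "column": col,
                    "value": counts.index.astype(str),
                    "count": counts.values,
                    "percent": (counts / len(df) * 100).round(1).values,
                }
            )
        )
    return pd.concat(frames, ignore_index=True)
```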
Submitting Author: Mai Le (@lephanthuymai)
Other Authors: Aditya Bhatraju (@adibns), Charles Suresh (@charlessuresh), Rahul Kuriyedath (@rahulkuriyedath)
Package Name: datascience_eda
One-Line Description of Package: This package includes functions handling various common tasks during the exploratory data analysis stage of a data science project.
Repository Link: https://github.com/UBC-MDS/datascience_eda
Version submitted: https://github.com/UBC-MDS/datascience_eda/releases/tag/milestone4
Editor: TBD
Reviewer 1: TBD
Reviewer 2: TBD
Archive: TBD
Version accepted: TBD
Description
This package includes functions assisting data scientists with common tasks during the exploratory data analysis stage of a data science project. Its functions will help the data scientist to do preliminary analysis on common column types like numeric columns, categorical columns, and text columns; it will also conduct several experimental clusterings on the dataset.
Scope
* Please fill out a pre-submission inquiry before submitting a data visualization package. For more info, see notes on categories of our guidebook.
The datascience_eda package provides functions to automate most of the preliminary exploratory data analysis tasks, extract useful insights from the dataset, and generate plots to visualize the findings.
The target audience of this package is data scientists; it will help improve the efficiency of the EDA process.
There are various Python packages providing functions to be used in EDA; most of them focus on identifying anomalies in numeric columns, so their exact functionalities differ from ours. Furthermore, to our knowledge there is no EDA-related package that provides functions to handle text columns and data clustering.
@tag the editor you contacted:
Technical checks
For details about the pyOpenSci packaging requirements, see our packaging guide. Confirm each of the following by checking the box. This package:
Publication options
JOSS Checks
- [ ] The package has an **obvious research application** according to JOSS's definition in their [submission requirements][JossSubmissionRequirements]. Be aware that completing the pyOpenSci review process **does not** guarantee acceptance to JOSS. Be sure to read their submission requirements (linked above) if you are interested in submitting to JOSS.
- [ ] The package is not a "minor utility" as defined by JOSS's [submission requirements][JossSubmissionRequirements]: "Minor ‘utility’ packages, including ‘thin’ API clients, are not acceptable." pyOpenSci welcomes these packages under "Data Retrieval", but JOSS has slightly different criteria.
- [ ] The package contains a `paper.md` matching [JOSS's requirements][JossPaperRequirements] with a high-level description in the package root or in `inst/`.
- [ ] The package is deposited in a long-term repository with the DOI:
*Note: Do not submit your package separately to JOSS*
Are you OK with Reviewers Submitting Issues and/or pull requests to your Repo Directly?
This option will allow reviewers to open smaller issues that can then be linked to PR's rather than submitting a more dense text based review. It will also allow you to demonstrate addressing the issue via PR links.
Code of conduct
P.S. *Have feedback/comments about our review process? Leave a comment here.*
Editor and Review Templates
Editor and review templates can be found here