Check our collection of public projects :gift:, where you can find multiple Kaggle competitions with code, experiments and outputs.
Here at Neptune we enjoy participating in Kaggle competitions. The Toxic Comment Classification Challenge is especially interesting because it touches on the important issue of online harassment.
You need to be registered on neptune.ml to be able to use our predictions for your ensemble models.
start notebook
Click the browse button and run the neptune_ensembling.ipynb file from this repository. The gcp-large worker is the recommended one. Running the notebook as is got 0.986+ on the LB.
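The core idea behind the ensembling notebook is blending several models' predictions. A minimal sketch of one common blending scheme, rank averaging, is below; the `rank_average` helper is hypothetical and illustrative, not the notebook's actual code.

```python
# Hypothetical rank-averaging blend (illustrative, not the notebook's code):
# average each score's rank within its column, which is robust to models
# whose probabilities are calibrated differently.
import numpy as np

def rank_average(predictions):
    """Blend a list of (n_samples, n_classes) score arrays by rank.

    Returns blended scores scaled to [0, 1].
    """
    blended = np.zeros_like(predictions[0], dtype=float)
    for preds in predictions:
        # argsort of argsort gives each value's rank within its column
        ranks = preds.argsort(axis=0).argsort(axis=0).astype(float)
        blended += ranks / (len(preds) - 1)  # scale ranks to [0, 1]
    return blended / len(predictions)

# Toy example: blend two models' scores for 3 samples x 2 classes
model_a = np.array([[0.9, 0.1], [0.5, 0.4], [0.2, 0.8]])
model_b = np.array([[0.8, 0.2], [0.6, 0.3], [0.1, 0.9]])
blend = rank_average([model_a, model_b])
```

In practice you would load each model's test predictions from CSV before blending; rank averaging only preserves ordering, which is what AUC-based leaderboards score.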
In this open source solution you will find references to neptune.ml. It is a free platform for community users, which we use daily to keep track of our experiments. Please note that using neptune.ml is not necessary to proceed with this solution: you may run it as a plain Python script :wink:.
We are contributing starter code that is easy to use and extend. We did it before with Cdiscount's Image Classification Challenge, and we believe it is the right way to open data science to the wider community and encourage more people to participate in challenges. This starter is a ready-to-use, end-to-end solution. Since all computations are organized in separate steps, it is also easy to extend. Check devbook.ipynb for more information about different pipelines.
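The step-based organization mentioned above can be sketched roughly as follows. The `Step` class here is a hypothetical simplification for illustration, not the starter's actual API: each step memoizes its output, so pipelines can share upstream work and be extended by composing new steps.

```python
# Illustrative sketch of a step-based pipeline (hypothetical class, not
# the starter's actual API).
class Step:
    def __init__(self, name, transform, inputs=()):
        self.name = name
        self.transform = transform   # callable run on the inputs' outputs
        self.inputs = list(inputs)   # upstream Step objects
        self._cache = None           # memoized output, computed once

    def fit_transform(self, data):
        if self._cache is None:
            upstream = [step.fit_transform(data) for step in self.inputs]
            self._cache = self.transform(data, *upstream)
        return self._cache

# Toy pipeline: raw texts -> tokens -> token counts
texts = Step("texts", lambda data: data["texts"])
tokens = Step("tokens", lambda data, xs: [x.split() for x in xs],
              inputs=[texts])
counts = Step("counts", lambda data, ts: [len(t) for t in ts],
              inputs=[tokens])
result = counts.fit_transform({"texts": ["you are great", "so toxic"]})
```

Extending such a pipeline means adding a new `Step` that lists existing steps as inputs; already-computed steps are reused from the cache rather than recomputed.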
Now we want to go one step further and invite you to participate in the development of this analysis pipeline. At a later stage of the competition (early February) we will invite top contributors to join our team on Kaggle.
You are welcome to extend this pipeline and contribute your own models or procedures. Please refer to CONTRIBUTING for more details.
Create the project toxic on the neptune site: after logging in to your neptune account, follow the Projects link (top bar, left side), then click the New project button. This action will generate the project-key TOX, which is already listed in the neptune.yaml.
run setup commands
$ git clone https://github.com/neptune-ml/kaggle-toxic-starter.git
$ pip3 install neptune-cli
$ neptune login
start experiment
$ neptune send --environment keras-2.0-gpu-py3 --worker gcp-gpu-medium --config best_configs/fasttext_gru.yaml -- train_evaluate_predict_cv_pipeline --pipeline_name fasttext_gru --model_level first
This should get you to 0.9852 on the LB. Happy training! :)
Refer to Neptune documentation and Getting started: Neptune Cloud for more.
Please refer to the Getting started: local instance for the installation procedure.
Below, the end-to-end pipeline is visualized. You can run exactly this one!
We have also prepared something simpler to just get you started:
There are several ways to seek help: