DeepSE / AILogs


AI LOG from 2020-07-22 #40

Open · hunkim opened this issue 4 years ago

hunkim commented 4 years ago

article

text augmentation

Data Augmentation in NLP: Best Practices From a Kaggle Master. There are many tasks in NLP, from text classification to question answering, but whatever you do, the amount of data you have to train your model heavily impacts model performance. What can you do to make your dataset larger? Simple option -> Get more data :). But acquiring and labeling additional observations can be […] https://neptune.ai/blog/data-augmentation-nlp?utm_source=facebook&utm_medium=post-in-group&utm_campaign=blog-data-augmentation-nlp&utm_content=
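Not from the article itself, just a minimal sketch of two cheap augmentation operations (random swap and random deletion, in the spirit of EDA-style augmentation) to show how little code it takes to stretch a small labeled dataset; the example sentence is made up:

```python
# Hedged sketch of simple text augmentation: random swap and random deletion.
import random

def random_swap(tokens, n_swaps=1):
    """Swap the positions of two randomly chosen tokens, n_swaps times."""
    tokens = tokens[:]
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token independently with probability p, keeping at least one."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

sentence = "data augmentation can make a small labeled dataset go further".split()
print(" ".join(random_swap(sentence, n_swaps=2)))
print(" ".join(random_deletion(sentence, p=0.2)))
```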

Generating RL algorithms?

DeepMind’s AI automatically generates reinforcement learning algorithms Researchers at DeepMind propose a new technique that automatically discovers a reinforcement learning algorithm from scratch. https://venturebeat.com/2020/07/20/deepminds-ai-automatically-generates-reinforcement-learning-algorithms/

ACL 2020 review

ACL 2020 Review with the Pingpong Team: we picked out and reviewed papers presented at ACL 2020. https://blog.pingpong.us/acl2020-review/

This looks good. Stories for every day.

Daily Agile #522: "Two things to do in the daily meeting to put an end to complaining," and more. Every day we share news on agile, lean, organizational culture, collaboration, leadership, self-improvement, and related topics. Please leave your impressions, agreements, and counterarguments in the comments; we look forward to lively discussion. Past issues are available [here]. Two things to do in the daily meeting to put an end to complaining: the first thing you should do, when wrapping up each day's daily meeting, is how today's daily meeting … https://agile-od.net/2020/07/22/%EC%9D%BC%EA%B0%84-%EC%95%A0%EC%9E%90%EC%9D%BC5227-22-%EB%B6%88%ED%8F%89%EC%9D%84-%EB%81%9D%EB%82%B4%EA%B8%B0-%EC%9C%84%ED%95%B4%EC%84%9C-%EB%8D%B0%EC%9D%BC%EB%A6%AC%EB%AF%B8%ED%8C%85%EC%97%90/?fbclid=IwAR3Yzoxfh3MiSsnFxWiRb-jlDQ5DKO0nh9VzWC470K5jOJhk7WYb1PEiKzs

Forecasting

M5 Forecasting - Accuracy | Kaggle https://www.kaggle.com/c/m5-forecasting-accuracy

The awesome thing everyone has been waiting for.

GPT-3: A deep-learning model for natural language. A step towards the future. https://medium.com/@anas_ali/gpt-3-a-deep-learning-model-for-natural-language-406afde92733

tune

Using PyTorch Lightning with Tune — Ray 0.9.0.dev0 documentation PyTorch Lightning is a framework which brings structure into training PyTorch models. It aims to avoid boilerplate code, so you don’t have to write the same training loops all over again when building a new model. https://docs.ray.io/en/master/tune/tutorials/tune-pytorch-lightning.html#
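A minimal sketch of the tutorial's idea, assuming the Ray Tune / PyTorch Lightning integration of that era (`TuneReportCallback` from `ray.tune.integration.pytorch_lightning`); exact module paths and arguments may differ across Ray and Lightning versions, and the toy model and data here are invented for illustration:

```python
# Hedged sketch: tune a toy PyTorch Lightning model with Ray Tune.
# TinyClassifier and make_loader() are made-up placeholders, not from the docs.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from ray import tune
from ray.tune.integration.pytorch_lightning import TuneReportCallback


class TinyClassifier(pl.LightningModule):
    def __init__(self, lr, hidden_size):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(10, hidden_size), nn.ReLU(), nn.Linear(hidden_size, 2)
        )
        self.lr = lr

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, _):
        x, y = batch
        return nn.functional.cross_entropy(self(x), y)

    def validation_step(self, batch, _):
        x, y = batch
        # Logged as "val_loss" so TuneReportCallback can forward it to Tune.
        self.log("val_loss", nn.functional.cross_entropy(self(x), y))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)


def make_loader():
    # Synthetic binary-classification data, just to keep the sketch runnable.
    x = torch.randn(256, 10)
    y = (x.sum(dim=1) > 0).long()
    return DataLoader(TensorDataset(x, y), batch_size=32)


def train_tune(config):
    model = TinyClassifier(lr=config["lr"], hidden_size=config["hidden_size"])
    trainer = pl.Trainer(
        max_epochs=3,
        callbacks=[TuneReportCallback({"loss": "val_loss"}, on="validation_end")],
    )
    trainer.fit(model, make_loader(), make_loader())


analysis = tune.run(
    train_tune,
    metric="loss",
    mode="min",
    num_samples=4,
    config={
        "lr": tune.loguniform(1e-4, 1e-1),
        "hidden_size": tune.choice([16, 32, 64]),
    },
)
print("Best config:", analysis.best_config)
```

Each trial runs the usual Lightning training loop; the callback reports the logged validation loss back to Tune at the end of every validation pass, which is what lets Tune compare and schedule trials.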