
This repo contains my coursework, assignments, and slides for the Natural Language Processing Specialization by deeplearning.ai on Coursera.
https://www.coursera.org/specializations/natural-language-processing

Natural Language Processing Specialization

Natural Language Processing (NLP) uses algorithms to understand and manipulate human language. This technology is one of the most broadly applied areas of machine learning. As AI continues to expand, so will the demand for professionals skilled at building models that analyze speech and language, uncover contextual patterns, and produce insights from text and audio. This Specialization will equip you with the state-of-the-art deep learning techniques needed to build cutting-edge NLP systems. By the end of this Specialization, you will be ready to design NLP applications that perform question-answering and sentiment analysis, create tools to translate languages and summarize text, and even build chatbots.

This Specialization is for students of machine learning or artificial intelligence as well as software engineers looking for a deeper understanding of how NLP models work and how to apply them. Learners should have a working knowledge of machine learning, intermediate Python skills including experience with a deep learning framework (e.g., TensorFlow, Keras), and proficiency in calculus, linear algebra, and statistics. If you would like to brush up on these skills, we recommend the Deep Learning Specialization, offered by deeplearning.ai and taught by Andrew Ng.

This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and the co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper.

Course 1: Classification and Vector Spaces in NLP

This is the first course of the Natural Language Processing Specialization.

Week 1: Logistic Regression for Sentiment Analysis of Tweets (see the sketch after this list)

Week 2: Naïve Bayes for Sentiment Analysis of Tweets

Week 3: Vector Space Models

Week 4: Word Embeddings and Locality Sensitive Hashing for Machine Translation
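
Week 1 trains a logistic regression classifier on tweet features. Below is a minimal sketch of the same idea using scikit-learn's bag-of-words features rather than the course's hand-built positive/negative frequency features; the tiny dataset is made up for illustration.

```python
# Minimal logistic-regression sentiment sketch (illustrative data, not course code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["i love this movie", "great day, feeling happy",
          "this is terrible", "i hate waiting, so sad"]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(tweets)        # sparse bag-of-words counts
clf = LogisticRegression().fit(X, labels)   # sigmoid over a weighted sum of word counts

print(clf.predict(vectorizer.transform(["what a great movie"])))  # expected: [1] (positive)
```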

Course 2: Probabilistic Models in NLP

This is the second course of the Natural Language Processing Specialization.

Week 1: Auto-correct using Minimum Edit Distance (see the sketch after this list)

Week 2: Part-of-Speech (POS) Tagging

Week 3: N-gram Language Models

Week 4: Word2Vec and Stochastic Gradient Descent
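
The Week 1 auto-correct assignment ranks candidate corrections by their edit distance from the misspelled word. A short sketch of the underlying dynamic-programming table is shown below; the cost values (insert = 1, delete = 1, substitute = 2) are the commonly used scheme, not necessarily a copy of the assignment code.

```python
# Minimum edit distance via dynamic programming (Levenshtein-style).
def min_edit_distance(source, target, ins_cost=1, del_cost=1, sub_cost=2):
    m, n = len(source), len(target)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):          # cost of deleting all of source[:i]
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, n + 1):          # cost of inserting all of target[:j]
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            replace = 0 if source[i - 1] == target[j - 1] else sub_cost
            D[i][j] = min(D[i - 1][j] + del_cost,     # delete
                          D[i][j - 1] + ins_cost,     # insert
                          D[i - 1][j - 1] + replace)  # substitute (or free match)
    return D[m][n]

print(min_edit_distance("play", "stay"))  # 4: two substitutions at cost 2 each
```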

Course 3: Sequence Models in NLP

This is the third course in the Natural Language Processing Specialization.

Week 1: Sentiment with Neural Nets

Week 2: Language Generation Models

Week 3: Named Entity Recognition (NER)

Week 4: Siamese Networks (see the sketch below)
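
Week 4 uses Siamese networks to decide whether two questions are duplicates: both inputs pass through the same shared encoder and the two encodings are compared with cosine similarity. The sketch below keeps only that structure, with made-up random word embeddings and a mean-pooling "encoder" standing in for the trained LSTM encoder used in the course.

```python
# Toy Siamese-style comparison: shared encoder + cosine similarity (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["how", "old", "are", "you", "what", "is", "your", "age"]
emb = {w: rng.normal(size=8) for w in vocab}   # toy word embeddings

def encode(sentence):
    """Shared encoder: average the word vectors and L2-normalize."""
    vecs = [emb[w] for w in sentence.split() if w in emb]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def similarity(q1, q2):
    return float(np.dot(encode(q1), encode(q2)))  # cosine similarity of the two encodings

print(similarity("how old are you", "what is your age"))  # some value in [-1, 1]
print(similarity("how old are you", "how old are you"))   # ~1.0 for identical inputs
```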

Course 4: Attention Models in NLP

This is the fourth course in the Natural Language Processing Specialization.

Week 1: Neural Machine Translation with Attention (see the sketch after this list)

Week 2: Summarization with Transformer Models

Week 3: Question-Answering with Transformer Models

Week 4: Chatbots with a Reformer Model
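
Week 1 introduces attention for neural machine translation, and the later weeks build transformer- and Reformer-based models on the same core operation. Below is a minimal numpy sketch of scaled dot-product attention; it is a generic illustration with toy shapes, not the Trax implementation used in the assignments.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                    # attention-weighted sum of values

# Toy example: 2 query positions, 3 key/value positions, d_k = 4.
rng = np.random.default_rng(42)
Q, K, V = (rng.normal(size=s) for s in [(2, 4), (3, 4), (3, 4)])
print(scaled_dot_product_attention(Q, K, V).shape)  # (2, 4)
```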

Specialization Completion Certificate

Certificate