COMP 790.139 (Spring 2019): Advanced Topics in NLP: Recent Progress in Different Learning Paradigms

Instructor: Mohit Bansal
Units: 3
Office: FB-246
Lectures: Fridays 3:00pm-5:30pm, Room FB-008
Office Hours: Fridays 2:00pm-3:00pm (by appointment) (FB-246)
Course Webpage: http://www.cs.unc.edu/~mbansal/teaching/advanced-nlp-seminar-spring19.html
Course Email: nlpcomp790unc -at- gmail.com


Syllabus Topics

This course will be an advanced-topics seminar on natural language processing, focusing on recent advances via diverse learning paradigms such as: multi-task learning; reinforcement learning and architecture learning; unsupervised learning, pretraining, and fine-tuning; transfer learning and domain adaptation; meta-learning; multi-view/correlational representation learning; adversarial learning, GANs, and data augmentation; and active learning (see the tentative schedule below).

This will be a research-oriented, graduate-level seminar course, where we will read lots of interesting research papers, brainstorm ideas on the latest research topics, and build and write up fun, novel projects!
Please email me or drop by my office if you have any questions!


Prerequisites

Since this is an advanced-topics NLP class, students are expected to have background equivalent to Dr. Bansal's Fall 2016 or Fall 2017 regular NLP class.


Grading (tentative)

Grading will consist of paper summaries, paper presentations, and a course project (proposal, midterm, and final presentations and write-ups; see the schedule below).

Details are in the first class's intro lecture slides. There will not be any exams. All submissions should be emailed to: nlpcomp790unc@gmail.com


Lateness Policy

Students are allowed 3 free late days for assignments over the semester. After that, late assignments will be accepted with a 20% reduction in value per day late.
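
For concreteness, here is a minimal sketch of the penalty arithmetic, assuming a linear reduction of 20 percentage points of the original value per late day beyond the remaining free days (the function name and this linear interpretation are illustrative assumptions, not an official grading tool):

    # Minimal sketch of the lateness policy (illustrative, not official).
    # Assumes a linear penalty: 20% of the original value per penalized day.
    def late_score(raw_score: float, days_late: int, free_days_left: int) -> float:
        # Days beyond the remaining free late days are penalized.
        penalized_days = max(0, days_late - free_days_left)
        # Each penalized day removes 20% of the assignment's value.
        multiplier = max(0.0, 1.0 - 0.20 * penalized_days)
        return raw_score * multiplier

    # Example: a 90-point assignment submitted 4 days late with 3 free
    # late days remaining has 1 penalized day: 90 * 0.8 = 72.0.
    print(late_score(90.0, days_late=4, free_days_left=3))  # 72.0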


Collaboration Policy

Paper summaries have to be written and submitted individually. Paper presentations have to be done individually. Projects can be done individually (e.g., if the project relates to your current research) or in pairs (with proportional work and clearly outlined contributions from each team member).


Tentative Schedule

Date | Topic | Readings | Discussion Leaders | To-dos
Jan 11 | Intro to Class | -- | Mohit | -
Jan 18 | Multi-Task Learning 1 | (1) Multi-task Sequence to Sequence Learning;
(2) A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks;
(3) Multi-Task Video Captioning with Video and Entailment Generation;
(4) One Model To Learn Them All;
(5) The Natural Language Decathlon: Multitask Learning as Question Answering;
| Darryl, Yichen, Mohit | -
Jan 25 | Multi-Task Learning 2 | (1) Deep multi-task learning with low level tasks supervised at lower layers;
(2) Soft, Layer-Specific Multi-Task Summarization with Entailment and Question Generation;
(3) When is multitask learning effective? Semantic sequence prediction under varying data conditions;
(4) Latent Multi-task Architecture Learning;
(5) Dynamic Multi-Level Multi-Task Learning for Sentence Simplification;
| Xiang, Shiyue, Han, Mohit | -
Feb 1 | Reinforcement Learning 1 | (1) Sequence Level Training with Recurrent Neural Networks;
(2) A Deep Reinforced Model for Abstractive Summarization;
(3) Multi-Reward Reinforced Summarization with Saliency and Entailment;
(4) Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting;
(5) Deep Reinforcement Learning for Dialogue Generation;
| Tianxiang, Yubo, Ram, Mohit | -
Feb 8 | Reinforcement Learning 2 and Architecture Learning | (1) Self-critical Sequence Training for Image Captioning;
(2) Reward Augmented Maximum Likelihood for Neural Structured Prediction;
(3) Learning to Reason: End-to-End Module Networks for Visual Question Answering;
(4) Neural Architecture Search with Reinforcement Learning;
(5) Efficient Neural Architecture Search via Parameter Sharing;
| Yang, Larry, Tong, Mohit | -
Feb 15 | Unsupervised Learning, Pretraining, and Fine-Tuning | (1) Phrase-Based & Neural Unsupervised Machine Translation;
(2) When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?;
(3) Deep contextualized word representations (ELMo);
(4) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding;
(5) Improving Language Understanding by Generative Pre-Training;
| Yichen, Darryl, Mohit | -
Feb 22 | Transfer Learning and Domain Adaptation 2 | (1) Universal Language Model Fine-tuning for Text Classification;
(2) Strong Baselines for Neural Semi-Supervised Learning under Domain Shift;
(3) Semi-Supervised Sequence Modeling with Cross-View Training;
(4) Unsupervised Domain Adaptation by Backpropagation;
(5) Learning Robust Representations by Projecting Superficial Statistics Out;
| Shiyue, Xiang, Mohit | -
Mar 1 | Meta-Learning | (1) Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks;
(2) Meta-Learning for Low-Resource Neural Machine Translation;
(3) Natural Language to Structured Query Generation via Meta-Learning;
(4) Learning to Reweight Examples for Robust Deep Learning;
(5) Learning Unsupervised Learning Rules;
| Yang, Tianxiang, Mohit | -
Mar 8 | Midterm Project Proposal Presentations | -- | All | -
Mar 15 | Spring Break: No Class | -- | -- | -
Mar 22 | Multi-View/Correlational Representation Learning | (1) Improving Vector Space Word Representations Using Multilingual Correlation;
(2) Deep Multilingual Correlation for Improved Word Embeddings;
(3) Multi-View Learning of Word Embeddings via CCA;
(4) Multi-view Recurrent Neural Acoustic Word Embeddings;
| Yubo, Larry, Mohit | Midterm Write-up Due Mar 24
Mar 29 | Adversarial Learning, GANs, Data Augmentation 1 | (1) Adversarial Examples for Evaluating Reading Comprehension Systems;
(2) Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models;
(3) Robust Machine Comprehension Models via Adversarial Training;
(4) Adversarial Learning for Neural Dialogue Generation;
(5) Self-Training for Jointly Learning to Ask and Answer Questions;
(6) Speaker-Follower Models for Vision-and-Language Navigation;
| Shiyue, Yichen, Mohit | -
Apr 5 | Adversarial Learning, GANs, Data Augmentation 2 | (1) SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient;
(2) Improved Training of Wasserstein GANs;
(3) Language Generation with Recurrent Generative Adversarial Networks without Pre-training;
(4) Adversarial Feature Matching for Text Generation;
(5) Long Text Generation via Adversarial Training with Leaked Information;
(6) MaskGAN: Better Text Generation via Filling in the______;
| Xiang, Yang, Yubo, Mohit | -
Apr 12 | Active Learning | (1) Active Learning for Natural Language Processing (Literature Review);
(2) Learning how to Active Learn: A Deep Reinforcement Learning Approach;
(3) Learning How to Actively Learn: A Deep Imitation Learning Approach;
(4) Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study;
(5) Learning a Policy for Opportunistic Active Learning;
| Darryl, Tianxiang, Larry, Mohit | -
Apr 19 | University Holiday (No Class) | -- | -- | -
Apr 26 | Final Project Presentations (Last Class) | -- | All | -
May 6 | Final Write-Ups Due | -- | All | -



Disclaimer

The professor reserves the right to make changes to the syllabus, including project due dates. These changes will be announced as early as possible.