Licheng Yu

My name is Licheng Yu (虞立成). I received my PhD in Computer Science from the University of North Carolina at Chapel Hill in May 2019. My advisor was Tamara L. Berg, and I also worked closely with Mohit Bansal during my PhD studies. My research interests lie in computer vision and natural language processing.

I completed Master's degrees at both Georgia Tech and Shanghai Jiao Tong University in 2014, and received my bachelor's degree from Shanghai Jiao Tong University.

Email: licheng [at] cs.unc.edu
Office: 201 S. Columbia St., Rm-257, UNC-Chapel Hill, NC 27599-3175
More info: [Resume], [Google Scholar], [LinkedIn], [GitHub].

Work Experience

2019.07—Present: Researcher

2014.08—2019.05: UNC-CH Research Assistant

2018.05—2018.08: FAIR Research Intern

2017.05—2017.08: Adobe Research Intern

2016.05—2016.08: eBay Research Intern

2011.09—2013.04: SJTU Research Assistant

Projects & Publications

TVQA+: Spatio-Temporal Grounding for Video Question Answering
arXiv:1904.11574
Jie Lei, Licheng Yu, Tamara L. Berg, Mohit Bansal
Multi-Target Embodied Question Answering
CVPR 2019
Licheng Yu, Xinlei Chen, Georgia Gkioxari, Mohit Bansal, Tamara L. Berg, Dhruv Batra
[Paper] [Video]
Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout
NAACL 2019
Hao Tan, Licheng Yu, Mohit Bansal
[Paper] [Code]
TVQA: Localized, Compositional Video Question Answering
EMNLP 2018
Jie Lei, Licheng Yu, Mohit Bansal, Tamara L. Berg
[Paper] [Project] [Explore] (Oral presentation)
MAttNet: Modular Attention Network for Referring Expression Comprehension
CVPR 2018
Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, Tamara L. Berg
From Image to Language and Back Again
Journal of Natural Language Engineering (JNLE), 2018
Anya Belz, Tamara L. Berg, Licheng Yu
[Paper]
Physics-Inspired Garment Recovery from a Single-View Image
ACM Transactions on Graphics, 2018
Shan Yang, Tanya Ambert, Zherong Pan, Ke Wang, Licheng Yu, Tamara L. Berg, Ming C. Lin
A Unified Framework for Manifold Landmarking
IEEE Transactions on Signal Processing, 2018
Hongteng Xu, Licheng Yu, Mark Davenport, Hongyuan Zha
[Paper]
Hierarchically-Attentive RNN for Album Summarization and Storytelling
EMNLP 2017
Licheng Yu, Mohit Bansal, Tamara L. Berg
A Joint Speaker-Listener-Reinforcer Model for Referring Expressions
CVPR 2017
Licheng Yu, Hao Tan, Mohit Bansal, Tamara L. Berg
[Paper] [Code] [Project] [Talk] (Spotlight presentation 8%)
Modeling Context in Referring Expressions
ECCV 2016
Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, Tamara L. Berg
[Paper] [Dataset] [Talk] (Spotlight presentation 4.7%)
Visual Madlibs: Fill-in-the-blank Image Description and Question Answering
ICCV 2015
Licheng Yu, Eunbyung Park, Alexander C. Berg, Tamara L. Berg
Dictionary Learning with Mutually Reinforcing Group-Graph Structures
AAAI 2015
Licheng Yu*, Hongteng Xu*, Hongyuan Zha, Yi Xu
(* denotes equal contribution)
[Paper]
Vector Sparse Representation of Color Image Using Quaternion Matrix Analysis
IEEE Transactions on Image Processing (TIP), 2015
Yi Xu, Licheng Yu, Hongteng Xu, Truong Nguyen, Hao Zhang
[Paper] [Code]
Quaternion-based Sparse Representation of Color Image
IEEE International Conference on Multimedia and Expo, ICME 2013
Licheng Yu, Yi Xu, Hongteng Xu, Hao Zhang
[Paper] [Supplementary File] (Oral presentation)
Single Image Super-resolution via Phase Congruency Analysis
IEEE Visual Communications and Image Processing, VCIP 2013
Licheng Yu, Yi Xu, Bo Zhang
[Paper] (Oral presentation)
Self-Example Based Super-resolution with Fractal-based Gradient Enhancement
IEEE International Conference on Multimedia and Expo, ICME Workshop 2013
Licheng Yu, Yi Xu, Hongteng Xu
[Paper]
Robust Single Image Super-resolution based on Gradient Enhancement
APSIPA Annual Summit and Conference, APSIPA 2012
Licheng Yu, Yi Xu, Hongteng Xu, Xiaokang Yang

Miscellaneous

Gobang Android App (AI mode + 2-player mode)
Licheng Yu
Skill Measurement via Egocentric Vision in Wetlab
Licheng Yu, Yin Li, James Rehg


PhD Thesis: "Question Answering, Grounding, and Generation for Vision and Language" [PDF][Talk]