I am an Assistant Professor of Computer Science at the University of North Carolina at Chapel Hill.
Previously, I was a Postdoctoral Research Associate in Computer Science & Engineering at the University of Washington (2019-2022), working with Prof. Steve Seitz, Prof. Brian Curless, and Prof. Ira Kemelmacher-Shlizerman in the UW Reality Lab and GRAIL. I completed my Ph.D. (2013-2019) at the University of Maryland, College Park (UMD), advised by Prof. David Jacobs, and my undergraduate degree (2009-2013) in Electronics and Tele-Communication Engineering at Jadavpur University, Kolkata, India. I have also had the pleasure of working with many amazing researchers at NVIDIA Research, Snapchat Research, the Weizmann Institute of Science (Israel), and TU Dortmund (Germany).
Email: ronisen at cs.unc.edu
Twitter: @SenguptRoni
Google Scholar
CV
Office: Sitterson Hall 255, University of North Carolina at Chapel Hill
If you are a junior or senior undergraduate student at UNC interested in pursuing research in my group, please reach out to me via email.
Instructor: COMP 776/590: Computer Vision (UNC CS) Spring 2023, Fall 2023
Instructor: COMP 790/590: Neural Rendering (UNC CS) Fall 2022
MVPSNet: Fast Generalizable Multi-view Photometric Stereo We propose a generalizable approach to multi-view photometric stereo that significantly outperforms multi-view stereo alone, and matches the reconstruction quality of per-scene optimization techniques while being 400x faster.
Measured Albedo in the Wild: Filling the Gap in Intrinsics Evaluation The existing benchmark for evaluating intrinsic image decomposition in the wild (the WHDR metric on IIW) is incomplete, as it relies only on pairwise relative human judgements. To evaluate albedo comprehensively, we collect a new dataset, Measured Albedo in the Wild (MAW), and propose three new metrics that complement WHDR: intensity, chromaticity, and texture. We show that SOTA inverse rendering and intrinsic image decomposition algorithms overfit to the WHDR metric, and that the proposed MAW benchmark evaluates them in a way that better reflects their visual quality.
My3DGen: Building Lightweight Personalized 3D Generative Model We propose a parameter-efficient approach for building personalized 3D generative priors, updating only 0.6 million parameters compared to 31 million for full finetuning. A personalized 3D generative prior can reconstruct any test image of an individual and synthesize novel 3D images of that person without any test-time optimization or finetuning.
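A minimal sketch of what a parameter-efficient update of this kind can look like, assuming a LoRA-style low-rank adapter added to a single frozen weight matrix; the layer size, rank, and names below are illustrative, not the actual My3DGen parameterization.

    import numpy as np

    d_in, d_out, r = 512, 512, 4
    rng = np.random.default_rng(0)

    W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight (not updated)
    A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor (r x d_in)
    B = np.zeros((d_out, r))                   # trainable low-rank factor (d_out x r), init to zero

    def adapted_forward(x):
        # Forward pass with the frozen weight plus the personalized low-rank update.
        return x @ (W + B @ A).T

    y = adapted_forward(rng.standard_normal((1, d_in)))
    print(W.size, "params for full finetuning vs", A.size + B.size, "for the low-rank update")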
Bringing Telepresence to Every Desk We introduce a system that renders high-quality novel views from 4 RGBD cameras in a tele-conferencing setup, built around a novel multi-view point cloud rendering algorithm.
Motion Matters: Neural Motion Transfer for Better Camera Physiological Sensing Neural motion transfer serves as an effective data augmentation technique for PPG signal estimation from facial videos. We devise a strategy for augmenting publicly available datasets with transferred motion, improving performance by up to 75% over SOTA techniques on five benchmark datasets.
Universal Guidance for Diffusion Models Enables controlling diffusion models with arbitrary guidance modalities without retraining any use-specific components.
A Surface-normal Based Neural Framework for Colonoscopy Reconstruction Combines SLAM with near-field photometric stereo for 3D colon reconstruction from colonoscopy videos.
Towards Unified Keyframe Propagation Models We present a two-stream approach to video inpainting in which high-frequency features interact locally and low-frequency features interact globally via an attention mechanism.
Real-Time Light-Weight Near-Field Photometric Stereo Near-field photometric stereo is useful for 3D imaging of large objects. We capture multiple images of an object by moving a flashlight and reconstruct a 3D mesh. Our method is significantly faster and more memory-efficient than SOTA methods while producing better reconstruction quality.
Robust High-Resolution Video Matting with Temporal Guidance Background removal, a.k.a. alpha matting, on videos by exploiting temporal information with a recurrent architecture. Does not require a captured background image or manual annotations.
A Light Stage on Every Desk We learn a personalized relighting model by capturing a person watching YouTube videos. Potential applications include relighting during a Zoom call.
Shape and Material Capture at Home High-quality photometric stereo can be achieved with a simple flashlight. Recovers high-resolution geometry and reflectance by progressively refining the predictions at each scale, conditioned on the prediction at the previous scale.
Real-Time High Resolution Background Matting Background replacement at 30fps in 4K and 60fps in HD. The alpha matte is first extracted at low resolution and then selectively refined on patches.
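A minimal sketch of the coarse-to-fine idea, assuming a low-resolution alpha prediction plus a per-pixel error map from which only the most uncertain patches are refined; error_map, the patch size, and the placeholder refinement step are illustrative stand-ins, not the paper's actual networks.

    import numpy as np

    def select_patches_to_refine(error_map, patch=8, k=100):
        # Pick the k patch locations with the largest mean predicted error.
        H, W = error_map.shape
        h, w = H // patch, W // patch
        per_patch = error_map[:h * patch, :w * patch].reshape(h, patch, w, patch).mean(axis=(1, 3))
        top = np.argsort(per_patch.ravel())[::-1][:k]
        rows, cols = np.unravel_index(top, per_patch.shape)
        return [(r * patch, c * patch) for r, c in zip(rows, cols)]

    coarse_alpha = np.random.rand(270, 480)            # stand-in for the low-res alpha prediction
    error_map = np.random.rand(1080, 1920)             # stand-in for the predicted error map
    alpha_hr = np.kron(coarse_alpha, np.ones((4, 4)))  # naive 4x upsampling to full resolution
    for r, c in select_patches_to_refine(error_map):
        block = alpha_hr[r:r + 8, c:c + 8]
        alpha_hr[r:r + 8, c:c + 8] = np.clip(block, 0.0, 1.0)  # placeholder: a small network would refine this patch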
Lifespan Age Transformation Synthesis Age transformation from 0 to 70. Continuous aging is modeled by assuming 10 anchor age classes and interpolating in the latent space between them.
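A minimal sketch of obtaining a continuous age from discrete anchors, assuming one learned latent code per anchor class and linear interpolation between the two nearest anchors; the anchor ages and latent dimension are illustrative.

    import numpy as np

    anchor_ages = np.array([0, 4, 8, 15, 25, 35, 45, 55, 65, 70])   # 10 illustrative anchor classes
    anchor_latents = np.random.randn(len(anchor_ages), 256)         # stand-in for learned class latents

    def latent_for_age(age):
        # Linearly interpolate between the two anchor latents surrounding the target age.
        age = np.clip(age, anchor_ages[0], anchor_ages[-1])
        hi = int(np.searchsorted(anchor_ages, age))
        if anchor_ages[hi] == age:
            return anchor_latents[hi]
        lo = hi - 1
        w = (age - anchor_ages[lo]) / (anchor_ages[hi] - anchor_ages[lo])
        return (1 - w) * anchor_latents[lo] + w * anchor_latents[hi]

    z = latent_for_age(33)  # latent code for age 33, between the 25 and 35 anchors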
Background Matting: The World is Your Green Screen By simply capturing an additional image of the background, the alpha matte can be extracted without extensive human annotation in the form of a trimap.
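For context, a minimal sketch of the compositing model that matting inverts, I = alpha * F + (1 - alpha) * B: knowing the background B makes solving for alpha and the foreground F much better constrained, and once they are recovered the subject can be composited onto any new background. Array shapes below are illustrative.

    import numpy as np

    def composite(foreground, background, alpha):
        # Standard matting equation, with a single-channel alpha broadcast over RGB.
        a = alpha[..., None]
        return a * foreground + (1 - a) * background

    F = np.random.rand(720, 1280, 3)        # estimated foreground
    alpha = np.random.rand(720, 1280)       # estimated alpha matte
    new_bg = np.zeros((720, 1280, 3))       # any replacement background, e.g. a virtual green screen
    out = composite(F, new_bg, alpha)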
Neural Inverse Rendering of an Indoor Scene from a Single Image Self-supervision on real data is achieved with a Residual Appearance Renderer network, which can cast shadows and add inter-reflections and near-field lighting given the normals and albedo of the scene.
SfSNet: Learning Shape, Reflectance and Illuminance of Faces in the Wild Decomposes an unconstrained human face into surface normals, albedo, and spherical harmonics lighting. Learns from synthetic 3DMM data, followed by self-supervised finetuning on unlabelled real images. Soumyadip Sengupta, Daniel Lichy, Angjoo Kanazawa, Carlos D. Castillo, David Jacobs. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2020. [Paper] Also introduces SfSMesh, which uses the surface normals predicted by SfSNet to reconstruct a 3D face mesh.
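A minimal sketch of the Lambertian shading model such a decomposition typically uses, with second-order (9-coefficient) spherical harmonics lighting; the basis constants are the standard real SH values, and a single 9-D lighting vector is used here for simplicity, which may differ from SfSNet's exact rendering layer.

    import numpy as np

    def sh_basis(normals):
        # First 9 real spherical harmonic basis functions evaluated at unit normals (N x 3).
        x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
        return np.stack([
            0.282095 * np.ones_like(x),
            0.488603 * y, 0.488603 * z, 0.488603 * x,
            1.092548 * x * y, 1.092548 * y * z,
            0.315392 * (3 * z ** 2 - 1),
            1.092548 * x * z,
            0.546274 * (x ** 2 - y ** 2),
        ], axis=1)                                     # N x 9

    def render(albedo, normals, sh_light):
        # Lambertian rendering: shading = SH basis of the normal dotted with 9 lighting coefficients.
        shading = sh_basis(normals) @ sh_light         # N
        return albedo * shading[:, None]               # N x 3

    n = np.random.randn(100, 3)
    n /= np.linalg.norm(n, axis=1, keepdims=True)      # unit surface normals
    img = render(np.full((100, 3), 0.6), n, np.random.randn(9))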
A New Rank Constraint on Multi-view Fundamental Matrices, and its Application to Camera Location Recovery We prove that the matrix formed by stacking the fundamental matrices between all pairs of images has rank 6. We then introduce a non-linear ADMM-based optimization algorithm that uses this rank constraint to better estimate the camera parameters, improving Structure-from-Motion pipelines that require an initial camera estimate for bundle adjustment.
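A minimal numerical check of the rank-6 property, assuming cameras P_i = K_i R_i [I | -c_i] and the standard closed-form fundamental matrix between a pair of views; all quantities below are randomly generated for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 8  # number of views

    def skew(v):
        return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

    def random_rotation():
        q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        return q * np.sign(np.linalg.det(q))

    K = [np.diag([rng.uniform(500, 1500), rng.uniform(500, 1500), 1.0]) for _ in range(n)]
    R = [random_rotation() for _ in range(n)]
    c = [rng.standard_normal(3) for _ in range(n)]

    # Stack the pairwise fundamental matrices into a 3n x 3n block matrix (zero diagonal blocks).
    F = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            Fij = np.linalg.inv(K[i]).T @ R[i] @ skew(c[j] - c[i]) @ R[j].T @ np.linalg.inv(K[j])
            F[3 * i:3 * i + 3, 3 * j:3 * j + 3] = Fij

    s = np.linalg.svd(F, compute_uv=False)
    print(np.linalg.matrix_rank(F), s[6] / s[5])  # rank 6; the 7th singular value is numerically zero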
Solving Uncalibrated Photometric Stereo Using Fewer Images by Jointly Optimizing Low-rank Matrix Completion and Integrability We solve uncalibrated photometric stereo from as few as 4-6 images by posing it as a rank-constrained non-linear optimization solved with ADMM.
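A minimal sketch of the rank structure being exploited: under a Lambertian model the image measurements factor into light directions times albedo-scaled normals, giving a rank-3 matrix. The toy check below ignores the paper's key ingredients (shadowed/missing entries, the integrability constraint, and robustness to very few images) and only illustrates the factorization and its ambiguity.

    import numpy as np

    rng = np.random.default_rng(0)
    n_img, n_pix = 5, 1000

    L = rng.standard_normal((n_img, 3))            # unknown light directions, one per image
    N = rng.standard_normal((n_pix, 3))
    N /= np.linalg.norm(N, axis=1, keepdims=True)  # unit surface normals
    rho = rng.uniform(0.2, 1.0, size=(n_pix, 1))   # per-pixel albedo
    B = rho * N                                    # albedo-scaled normals

    M = L @ B.T                                    # n_img x n_pix measurements (no shadows or noise)
    print(np.linalg.matrix_rank(M))                # 3

    # Any rank-3 factorization recovers L and B only up to an invertible 3x3 ambiguity;
    # enforcing integrability of the normal field reduces this to the GBR ambiguity.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L_hat, B_hat = U[:, :3] * s[:3], Vt[:3].T
    print(np.allclose(L_hat @ B_hat.T, M))         # True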
Frontal to Profile Face Verification in the Wild We introduce CFP, a dataset for frontal-vs-profile face verification in the wild. We show that SOTA face verification algorithms degrade by about 10% on frontal-profile verification compared to frontal-frontal. Our dataset has been widely used to improve face verification across poses, as well as for face warping and pose synthesis with GANs.
A Frequency Domain Approach to Silhouette Based Gait Recognition
Constraints and Priors for Inverse Rendering from Limited Observations
Soumyadip Sengupta
Doctoral Thesis, University of Maryland, January 2019
[pdf]