Akshay Paruchuri

I'm a 3rd-year PhD student in the Department of Computer Science at UNC Chapel Hill. I'm advised by Soumyadip "Roni" Sengupta. I'm currently interested in research at the intersection of computer vision, computer graphics, machine learning, and healthcare. In addition to healthcare applications, I'm also interested in and occasionally get to work on projects involving augmented reality, virtual reality, and robotics.

Prior to graduate school, I developed my interest in wearable technologies and my skills as an engineer by working in Nike's Advanced Innovation Team. I completed my undergraduate education in the Department of Electrical and Computer Engineering at NC State University.

I'm happy to chat about research, new opportunities, and life in general with just about anyone. Feel free to contact me via email or Twitter.

Email  /  CV  /  Google Scholar  /  Twitter  /  Github

Research and Publications

For a complete and up-to-date list of my publications, please refer to my Google Scholar.

* denotes equal contribution for joint authorship or advising

rPPG-Toolbox: Deep Remote PPG Toolbox
Xin Liu, Akshay Paruchuri*, Girish Narayanswamy*, Xiaoyu Zhang, Jiankai Tang, Yuzhe Zhang, Yuntao Wang, Soumyadip Sengupta, Shwetak Patel, Daniel McDuff
NeurIPS 2023
Datasets and Benchmarks Track

We present a comprehensive toolbox, rPPG-Toolbox, that contains unsupervised and supervised rPPG models with support for public benchmark datasets, data augmentation, and systematic evaluation.

Motion Matters: Neural Motion Transfer for Better Camera Physiological Measurement
Akshay Paruchuri, Xin Liu, Yulu Pan, Shwetak Patel, Daniel McDuff*, Soumyadip Sengupta*
WACV 2024
Oral, Top 2.6% (53 of 2042 submissions)
[PDF][Project Page][arXiv][Code]

Neural Motion Transfer serves as an effective data augmentation technique for PPG signal estimation from facial videos. We devise a strategy for augmenting publicly available datasets with motion augmentation, improving performance by up to 79% in inter-dataset testing across five benchmark datasets and by 47% over existing SOTA results on PURE.

Reconstruction of Human Body Pose and Appearance Using Body-Worn IMUs and a Nearby Camera View for Collaborative Egocentric Telepresence
Qian Zhang, Akshay Paruchuri, YoungWoon Cha, Jia-Bin Huang, Jade Kandel, Howard Jiang, Adrian Ilie, Andrei State, Danielle Szafir, Daniel Szafir, Henry Fuchs
2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
[IEEE Xplore]

We present a collaborative approach to 3D reconstruction that combines a set of IMUs worn by a target person, used to estimate their body pose, with an external view from a nearby person wearing an AR headset, used to reconstruct their appearance.


In my free time these days, I enjoy exploring the world with my partner, finding ways to stay physically active, reading, and playing the occasional computer game (games like DotA 2 are my go-to). I've recently been trying to get into running. Maybe I'll do a marathon one day!

A quote that recently inspired me comes from a speech by Theodore Roosevelt (former President of the United States) titled "Citizenship in a Republic". It stirs me in a similar fashion to Max Ehrmann's Desiderata. The quote is as follows:

It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows the great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.
You can read the full speech here.

Website Credits to Xin Liu (source code) and Jon Barron (source code)