
Dingzeyu Li

Research Scientist
Adobe Seattle



I am a Research Scientist at Adobe Research. I received my PhD from Columbia University and my BEng from HKUST.

I am interested in audiovisual cross-modal media synthesis, using tools from computer vision, graphics, deep learning, and HCI. More broadly, I am interested in novel creative authoring and editing applications for everyone. My past research and engineering have been recognized by an Adobe MAX Sneaks Demo (2019), an ACM UIST Best Paper Award (2017), an Adobe Research Fellowship (2017), an NVIDIA PhD Fellowship Finalist selection (2017), a Shapeways Educational Grant (2016), and an HKUST academic achievement medal (2013). I have served on the international program committees of Eurographics 2020, Graphics Interface 2020, and ACM Multimedia 2019, and as a reviewer for various academic conferences, including SIGGRAPH, CVPR, ICCV, UIST, and CHI.

I’m always looking for interns and collaborators to work on research projects leading to publications, patents, and product features. Please feel free to reach out to learn more.


MakeItTalk: Speaker-Aware Talking Head Animation
Yang Zhou, Dingzeyu Li, Xintong Han, Evangelos Kalogerakis, Eli Shechtman, Jose Echevarria
arXiv 2020, in submission
[arxiv] [pdf] [video]

Deep Audio Prior
Yapeng Tian, Chenliang Xu, Dingzeyu Li
arXiv 2019, in submission
[arxiv] [pdf] [code] [demo]

Scene-Aware Audio Rendering via Deep Acoustic Analysis
Zhenyu Tang, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, Dinesh Manocha
IEEE VR 2020 (Journal Track), conditionally accepted
[arxiv] [project page] [video] [code coming soon]

LayerCode: Optical Barcodes for 3D Printed Shapes
Henrique Teles Maia, Dingzeyu Li, Yuan Yang, Changxi Zheng
SIGGRAPH 2019
[pdf] [low-res] [video] [project page with dataset] [code]

Audible Panorama: Automatic Spatial Audio Generation for Panorama Imagery
Haikun Huang, Michael Solah, Dingzeyu Li, Lap-Fai Yu
CHI 2019
[pdf] [low-res] [video] [project page] [sound database]

Scene-Aware Audio for 360° Videos
Dingzeyu Li, Timothy R. Langlois, Changxi Zheng
SIGGRAPH 2018
[pdf] [low-res] [video] [project page]

AirCode: Unobtrusive Physical Tags for Digital Fabrication
Dingzeyu Li, Avinash S. Nair, Shree K. Nayar, Changxi Zheng
UIST 2017
  Best Paper Award
[pdf] [low-res] [video] [slides] [talk] [code] [project page]

Interacting with Acoustic Simulation and Fabrication
Dingzeyu Li
UIST 2017 Doctoral Symposium
[pdf] [arxiv]

Crumpling Sound Synthesis
Gabriel Cirio, Dingzeyu Li, Eitan Grinspun, Miguel A. Otaduy, Changxi Zheng
SIGGRAPH Asia 2016
[pdf] [low-res] [user study code] [video] [project page]

Acoustic Voxels: Computational Optimization of Modular Acoustic Filters
Dingzeyu Li, David I.W. Levin, Wojciech Matusik, Changxi Zheng
SIGGRAPH 2016
[pdf] [low-res] [video] [slides] [project page]

Interactive Acoustic Transfer Approximation for Modal Sound
Dingzeyu Li, Yun Fei, Changxi Zheng
[pdf] [low-res] [video] [slides] [project page]

Expediting Precomputation for Reduced Deformable Simulation
Yin Yang, Dingzeyu Li, Weiwei Xu, Yuan Tian, Changxi Zheng
SIGGRAPH Asia 2015
[pdf] [low-res] [video] [project page]

Motion-Aware KNN Laplacian for Video Matting
Dingzeyu Li, Qifeng Chen, Chi-Keung Tang
ICCV 2013
[pdf] [video] [project page]

KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
TPAMI 2013
[pdf] [code] [project page]

KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
CVPR 2012
[pdf] [code] [project page]

Patents, Product Tech Transfers, Demos

Project Sweet Talk: Audio Driven Facial Animation from Single Image
Adobe MAX Sneaks Demo 2019
[video] [TechCrunch] [Adobe blog]
Presenter: Dingzeyu Li
Collaborators: Yang Zhou, Jose Echevarria, Eli Shechtman

Style-Aware Audio-Driven Talking Head Animation From a Single Image
Patent Filed (Pending), 2020
Yang Zhou, Jose Echevarria, Elya Shechtman, Dingzeyu Li

Rendering Scene-Aware Audio Using Neural Network-Based Acoustic Analysis
Patent Filed (Pending), 2019
Zhenyu Tang, Dingzeyu Li, Nicholas J. Bryan, Timothy R. Langlois

Robust Noise-Resilient Automatic Lipsync and Interactive Adjustment
Shipped in Adobe Character Animator 2019 [Release Note]
Dingzeyu Li

Physics-Aware 3D Shape Drop to Ground
Shipped in Adobe Dimension 2019
Dingzeyu Li

Open Source

The following is a list of research and personal projects that I have open-sourced.

Deep Audio Prior [github] [demo]

Audio Source Separation using Deep Networks without Training Data

AirCode [github]

computational structured-light imaging and a customized decoding pipeline

Crumpling Sound User Study [github] [demo]

user study to understand the perceived similarity between simulated and recorded sounds

KNN Matting [github]

foreground extraction and layer reconstruction
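To give a flavor of the idea, here is a hypothetical, heavily simplified sketch of KNN-style matting: nearest neighbors in a color-plus-position feature space define a graph Laplacian, and a constrained sparse linear solve propagates alpha from the trimap. This is not the released implementation; all parameter names and weights below are illustrative assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from sklearn.neighbors import NearestNeighbors

def knn_matte(image, trimap, k=10, lam=100.0):
    """image: HxWx3 floats in [0,1]; trimap: HxW with 1=fg, 0=bg, 0.5=unknown."""
    h, w, _ = image.shape
    n = h * w
    # Feature vector per pixel: RGB color plus normalized (x, y) position.
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate(
        [image.reshape(n, 3),
         xs.reshape(n, 1) / max(w, 1),
         ys.reshape(n, 1) / max(h, 1)], axis=1)
    # k nearest neighbors in feature space (includes each pixel itself).
    dists, idxs = NearestNeighbors(n_neighbors=k).fit(feats).kneighbors(feats)
    # Affinity: 1 minus normalized feature distance to each neighbor.
    rows = np.repeat(np.arange(n), k)
    vals = 1.0 - dists.ravel() / (dists.max() + 1e-8)
    A = sparse.coo_matrix((vals, (rows, idxs.ravel())), shape=(n, n))
    A = (A + A.T) / 2.0  # symmetrize
    L = sparse.diags(np.asarray(A.sum(axis=1)).ravel()) - A  # graph Laplacian
    # Pin the known foreground/background pixels from the trimap.
    known = trimap.ravel() != 0.5
    M = sparse.diags(known.astype(float))
    v = known * trimap.ravel()
    alpha = spsolve((L + lam * M).tocsr(), lam * v)
    return np.clip(alpha, 0.0, 1.0).reshape(h, w)
```

The solve minimizes the Laplacian smoothness term subject to a soft penalty (weight `lam`) that keeps known pixels at their trimap values.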

SeCluMon (Server Cluster Monitor) [github] [demo]

Python-based scripts to monitor CPU, RAM, and other stats on our lab cluster
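For a sense of what such a per-node probe collects, here is a small stdlib-only sketch (not SeCluMon's actual code; the POSIX-only calls are guarded, and the field names are made up):

```python
import json
import os

def node_snapshot():
    """Collect basic CPU/RAM stats for the local node, stdlib only."""
    snap = {"cpu_count": os.cpu_count()}
    try:
        snap["load_avg_1m"] = os.getloadavg()[0]   # 1-minute load average (POSIX)
    except (AttributeError, OSError):
        snap["load_avg_1m"] = None                 # e.g. on Windows
    try:
        page = os.sysconf("SC_PAGE_SIZE")          # bytes per memory page
        pages = os.sysconf("SC_PHYS_PAGES")        # total physical pages
        snap["mem_total_mb"] = page * pages // (1024 * 1024)
    except (AttributeError, ValueError, OSError):
        snap["mem_total_mb"] = None
    return snap

if __name__ == "__main__":
    # Each node could periodically ship this JSON to a central collector.
    print(json.dumps(node_snapshot(), indent=2))
```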

3D Models [thingiverse]

3D models in my research and personal projects
