
Dingzeyu Li
李丁泽宇

Research Scientist
Adobe Seattle

Contact Info
CV



News

Internship and collaboration at Adobe Research (flexible schedule in 2022): if you are a PhD student interested in a research internship on audiovisual and accessibility topics, please send me an email with your CV and a summary of your research interests.

About

I am a Research Scientist at Adobe Research. I received my PhD in Computer Science from Columbia University and my BEng from HKUST.



I am interested in audiovisual cross-modal analysis and synthesis for accessibility. Leveraging tools from computer vision, graphics, deep learning, and HCI, I focus on novel creative authoring and editing experiences for everyone. My past research and engineering have been recognized by an Emmy Award 🏆 for Technology and Engineering (2020), two Adobe MAX Sneaks Demos ✨ (2019, 2020), an ACM UIST Best Paper Award 🏅 (2017), an Adobe Research Fellowship 🎓 (2017), an NVIDIA PhD Fellowship Finalist selection (2017), a Shapeways Educational Grant 🧧 (2016), and an HKUST academic achievement medal 🥇 (2013). I have served as an international program committee member for Eurographics 2020/2021, Graphics Interface 2020, and ACM Multimedia 2019, and as a reviewer for academic conferences including SIGGRAPH, CVPR, ICCV, UIST, and CHI. I am also serving as the Adobe Fellowship program co-chair with Valentina Shin, recognizing outstanding PhD students conducting exceptional research.



I’m always looking for interns and collaborators to work on research projects that lead to publications, patents, and product features. Please feel free to reach out to learn more.

Outside of my day-to-day research, I also produce and host an interview podcast, "李丁聊天室".

Publications

Can one hear the shape of a neural network?: Snooping the GPU via Magnetic Side Channel
Henrique Teles Maia, Chang Xiao, Dingzeyu Li, Eitan Grinspun, Changxi Zheng
USENIX Security 2022
[final pdf coming soon] [previous ICLR submission]



Toward Automatic Audio Description Generation for Accessible Videos
Yujia Wang, Wei Liang, Haikun Huang, Yongqi Zhang, Dingzeyu Li, Lap-Fai Yu
CHI 2021
[pdf]



MakeItTalk: Speaker-Aware Talking Head Animation
Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis, Dingzeyu Li
SIGGRAPH Asia 2020
[arxiv] [video] [code]
[MAX Sneaks demo] [TechCrunch] [Adobe blog]



Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing
Yapeng Tian, Dingzeyu Li, Chenliang Xu
ECCV 2020 (Spotlight)
[pdf] [arxiv] [project page] [code]
CVPR 2020 Sight and Sound Workshop
[pdf] [video]



Deep Audio Prior
Yapeng Tian, Chenliang Xu, Dingzeyu Li
CVPR 2020 Sight and Sound Workshop
[pdf] [video]
arXiv 2020
[arxiv] [pdf] [code] [demo]



Scene-Aware Audio Rendering via Deep Acoustic Analysis
Zhenyu Tang, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, Dinesh Manocha
IEEE VR 2020 (Journal Track) / TVCG
[arxiv] [project page] [video] [code]



Scene-Aware Background Music Synthesis
Yujia Wang, Wei Liang, Wanwan Li, Dingzeyu Li, Lap-Fai Yu
ACM Multimedia 2020 (Oral Presentation)
[pdf]



LayerCode: Optical Barcodes for 3D Printed Shapes
Henrique Teles Maia, Dingzeyu Li, Yuan Yang, Changxi Zheng
SIGGRAPH 2019
[pdf] [low-res] [video] [project page with dataset] [code]



Audible Panorama: Automatic Spatial Audio Generation for Panorama Imagery
Haikun Huang, Michael Solah, Dingzeyu Li, Lap-Fai Yu
CHI 2019
[pdf] [low-res] [video] [project page] [sound database]



Scene-Aware Audio for 360° Videos
Dingzeyu Li, Timothy R. Langlois, Changxi Zheng
SIGGRAPH 2018
[pdf] [low-res] [video] [project page]



AirCode: Unobtrusive Physical Tags for Digital Fabrication
Dingzeyu Li, Avinash S. Nair, Shree K. Nayar, Changxi Zheng
UIST 2017
  Best Paper Award
[pdf] [low-res] [video] [slides] [talk] [code] [project page]



Interacting with Acoustic Simulation and Fabrication
Dingzeyu Li
UIST 2017 Doctoral Symposium
[pdf] [arxiv]



Crumpling Sound Synthesis
Gabriel Cirio, Dingzeyu Li, Eitan Grinspun, Miguel A. Otaduy, Changxi Zheng
SIGGRAPH Asia 2016
[pdf] [low-res] [user study code] [video] [project page]



Acoustic Voxels: Computational Optimization of Modular Acoustic Filters
Dingzeyu Li, David I.W. Levin, Wojciech Matusik, Changxi Zheng
SIGGRAPH 2016
[pdf] [low-res] [video] [slides] [project page]



Interactive Acoustic Transfer Approximation for Modal Sound
Dingzeyu Li, Yun Fei, Changxi Zheng
SIGGRAPH 2016
[pdf] [low-res] [video] [slides] [project page]



Expediting Precomputation for Reduced Deformable Simulation
Yin Yang, Dingzeyu Li, Weiwei Xu, Yuan Tian, Changxi Zheng
SIGGRAPH Asia 2015
[pdf] [low-res] [video] [project page]



Motion-Aware KNN Laplacian for Video Matting
Dingzeyu Li, Qifeng Chen, Chi-Keung Tang
ICCV 2013
[pdf] [video] [project page]



KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
TPAMI 2013
[pdf] [code] [project page]



KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
CVPR 2012
[pdf] [code] [project page]



(Public) Product Impacts, Tech Transfers, and Demos

Speech-Aware Animation
Shipped in Adobe Character Animator 2020 as a Sensei-powered ML feature
[Official Release Note] [Public Beta Release Note]
Press Coverage: [Forbes Coverage] [9to5Mac] [VentureBeat] [EnterpriseTalk] [Animation World Network] [Computer Graphics World] [BlogCritics] [ProVideo Coalition] [Animation Magazine]
Lead Researcher/Developer: Dingzeyu Li



Project On the Beat: An AI-powered music video editing tool for synchronizing body movements to beats
Adobe MAX Sneaks Demo 2020
[video] [Protocol] [Adobe blog]
Presenter: Yang Zhou
Collaborators: Dingzeyu Li, Jun Saito, Deepali Aneja, Jimei Yang



Project Sweet Talk: Audio Driven Facial Animation from Single Image
Adobe MAX Sneaks Demo 2019
[video] [TechCrunch] [Adobe blog]
Presenter: Dingzeyu Li
Collaborators: Yang Zhou, Jose Echevarria, Eli Shechtman



Robust Noise-Resilient Automatic Lipsync and Interactive Adjustment
Shipped in Adobe Character Animator 2019 [Release Note]
Lead Researcher/Developer: Dingzeyu Li



Physics-Aware 3D Shape Drop to Ground
Shipped in Adobe Dimension 2019
Lead Researcher/Developer: Dingzeyu Li



Research Interns/Students Mentored

  1. Oliver Alonzo, Rochester Institute of Technology, 2021 -- Topic: Tools for Making Videos Accessible to Deaf and Hard-of-Hearing People
  2. Yapeng Tian, University of Rochester, 2021 -- Topic: Tools for Making Videos Accessible to the Blind and Visually Impaired
  3. Yujia Wang, Beijing Institute of Technology, 2020 -- Topic: Automatic Audio Description Synthesis for the Blind and Visually Impaired
  4. Arda Senocak, KAIST, 2020 -- Topic: Causality Analysis in Audiovisual Understanding
  5. Henrique Maia, Columbia University, 2020 -- Topic: Snooping the GPU and Neural Network via Magnetic Side Channel
  6. Yang Zhou, UMass Amherst, 2020 -- Topic: Audio-driven Upper Body Gesture Synthesis
  7. Yang Zhou, UMass Amherst, 2019 -- Topic: Audio-driven Facial and Talking Head Synthesis
  8. Yapeng Tian, University of Rochester, 2019 -- Topic: Deep Audio Prior & Multimodal Video Parsing
  9. Zhenyu Tang, University of Maryland, 2019 -- Topic: Room Acoustic Analysis with Deep Learning
  10. Yujia Wang, Beijing Institute of Technology, 2019 -- Topic: Automatic Scene-Aware Background Music Synthesis
  11. Henrique Maia, Columbia University, 2018 -- Topic: Invisible Tags in 3D Printing via Layer-by-Layer Nature
  12. Avinash S. Nair, Columbia University, 2017 -- Topic: Invisible Tags in 3D Printing via Subsurface Scattering

Patents

  1. Rendering Scene-Aware Audio Using Neural Network-Based Acoustic Analysis. Filed, 2019.
  2. Style-Aware Audio-Driven Talking Head Animation From a Single Image. Filed, 2020.
  3. Selecting and Performing Operations on Hierarchical Clusters of Video Segments. Filed, 2020.
  4. Interacting With Hierarchical Clusters of Video Segments Using a Metadata Panel. Filed, 2020.
  5. Segmentation and Hierarchical Clustering of Video. Filed, 2020.
  6. Interacting With Hierarchical Clusters of Video Segments Using a Metadata Search. Filed, 2020.
  7. Interacting With Hierarchical Clusters of Video Segments Using a Video Timeline. Filed, 2020.
  8. Refining Image Acquisition Data Through Domain Adaptation. Filed, 2020.
  9. Re-Timing a Video Sequence to an Audio Sequence Based on Motion and Audio Beat Detection. Filed, 2021.

