
Dingzeyu Li

Research Scientist
Adobe Seattle

dinli@adobe.com
CV



News

  • Sept 2020 - At Adobe Video World, I shared our latest public beta release of Speech-Aware Animation in Character Animator.
    Check this release note for more details.
  • Aug 2020 - I gave an invited talk at the ECCV 2020 Multi-Modal Video Analysis Workshop.
    The talk covered our recent work on audio-driven animation from a single image and weakly supervised audiovisual video parsing. Watch the recording here.
  • Aug 2020 - MakeItTalk is conditionally accepted to SIGGRAPH Asia 2020!
    In this work, we develop a method to generate talking-head videos from an audio clip and a single image. Watch the video in the publication list. Congrats, Yang!
  • July 2020 - Audio-Visual Video Parsing Accepted to ECCV 2020 Spotlight
    Our work on using weak labels to parse audiovisual events from videos is accepted to ECCV 2020. More info to come. Congrats, Yapeng!
  • June 2020 - CVPR 2020 Sight and Sound Workshop
    Yapeng presented his intern projects on Deep Audio Prior and Audio-Visual Video Parsing. See more details here.
  • April 2020 - Internships/Collaborations Going Virtual during COVID-19
    Due to COVID-19, research internships and collaborations are either postponed or happening only virtually for 2020. Stay safe!
  • Feb 2020 - Eurographics 2020
    I am serving on the International Program Committee (IPC) of Eurographics 2020. This year the event, originally planned for Norrköping, Sweden, will be held virtually.
  • January 2020 — I am promoted to Research Scientist 2!
    Thanks to my wonderful collaborators and colleagues!
  • Dec 2019 — 3D Physics Engine Shipped in Adobe Dimension CC!
    Now you can drop 3D objects in Dimension and we use physics simulation to lay the object flat on the ground. The feature is named "Drop to Ground".
  • Nov 2019 — Project SweetTalk is selected as an Adobe MAX Sneaks in Los Angeles.
    Project SweetTalk creates dynamic videos from static images. We can animate drawings from centuries ago, random sketches, 2D cartoon characters, Japanese manga, stylized caricatures, and casual photos. See the demo here.
  • Oct 2019 — Our Scene-Aware Deep Acoustic Analysis Paper is Accepted to IEEE VR/TVCG!
  • June 2019 — Summer Intern Season Starts!
  • May 2019 — LayerCode is accepted to SIGGRAPH 2019!
  • April 2019 — Noise-Resilient Lip-Sync Shipped in Adobe Character Animator CC!
  • Feb 2019 - Immersive Audio Talk @ Adobe Global Tech Summit
    I presented the latest progress from Adobe Research's audio team at the company's largest internal event.
  • Feb 2019 - Physical Hyperlinks Talk @ University of Washington
    Invited by Prof. Adriana Schulz, I gave a talk on Physical Hyperlinks for Personalized Fabrication, a summary of my recent work on Acoustic Voxels, AirCode, and LayerCode.
  • Dec 2018 - Eurographics 2019 Short Paper Program Committee
  • Dec 2018 — First CHI Paper on Audible Panorama is Accepted!
  • Aug 2018 — Scene-Aware Audio in 360 Videos Presented at SIGGRAPH

About

I am a Research Scientist at Adobe Research. I received my PhD from Columbia University and my BEng from HKUST.



I am interested in audiovisual cross-modal media synthesis using tools from computer vision, graphics, deep learning, and HCI. More broadly, I am interested in novel creative authoring/editing applications for everyone. My past research and engineering have been recognized by an Adobe MAX Sneaks Demo (2019), an ACM UIST Best Paper Award (2017), an Adobe Research Fellowship (2017), an NVIDIA PhD Fellowship Finalist (2017), a Shapeways Educational Grant (2016), and an HKUST academic achievement medal (2013). I have served on the international program committees of Eurographics 2020, Graphics Interface 2020, and ACM Multimedia 2019, and as a reviewer for various academic conferences, including SIGGRAPH, CVPR, ICCV, UIST, and CHI.



I’m always looking for interns and collaborators to work on research projects leading to publications/patents/product features. Please feel free to reach out to learn more.

Outside of my day-to-day research, I also host an interview podcast, 李丁聊天室 (literally, "Li Ding's Chat Room").

Publications

MakeItTalk: Speaker-Aware Talking Head Animation
Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis, Dingzeyu Li
SIGGRAPH Asia 2020 (Conditionally Accepted)
[arxiv] [video] [code coming soon]
[MAX Sneaks demo] [TechCrunch] [Adobe blog]



Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing
Yapeng Tian, Dingzeyu Li, Chenliang Xu
ECCV 2020 (Spotlight)
[pdf] [arxiv] [project page] [code]
CVPR 2020 Sight and Sound Workshop
[pdf] [video]



Deep Audio Prior
Yapeng Tian, Chenliang Xu, Dingzeyu Li
CVPR 2020 Sight and Sound Workshop
[pdf] [video]
arxiv 2020
[arxiv] [pdf] [code] [demo]



Scene-Aware Audio Rendering via Deep Acoustic Analysis
Zhenyu Tang, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, Dinesh Manocha
IEEE VR 2020 (Journal Track) / TVCG
[arxiv] [project page] [video] [code]



Scene-Aware Background Music Synthesis
Yujia Wang, Wei Liang, Wanwan Li, Dingzeyu Li, Lap-Fai Yu
ACM Multimedia 2020 (Oral Presentation)
[pdf]



LayerCode: Optical Barcodes for 3D Printed Shapes
Henrique Teles Maia, Dingzeyu Li, Yuan Yang, Changxi Zheng
SIGGRAPH 2019
[pdf] [low-res] [video] [project page with dataset] [code]



Audible Panorama: Automatic Spatial Audio Generation for Panorama Imagery
Haikun Huang, Michael Solah, Dingzeyu Li, Lap-Fai Yu
CHI 2019
[pdf] [low-res] [video] [project page] [sound database]



Scene-Aware Audio for 360° Videos
Dingzeyu Li, Timothy R. Langlois, Changxi Zheng
SIGGRAPH 2018
[pdf] [low-res] [video] [project page]



AirCode: Unobtrusive Physical Tags for Digital Fabrication
Dingzeyu Li, Avinash S. Nair, Shree K. Nayar, Changxi Zheng
UIST 2017
  Best Paper Award
[pdf] [low-res] [video] [slides] [talk] [code] [project page]



Interacting with Acoustic Simulation and Fabrication
Dingzeyu Li
UIST 2017 Doctoral Symposium
[pdf] [arxiv]



Crumpling Sound Synthesis
Gabriel Cirio, Dingzeyu Li, Eitan Grinspun, Miguel A. Otaduy, Changxi Zheng
SIGGRAPH Asia 2016
[pdf] [low-res] [user study code] [video] [project page]



Acoustic Voxels: Computational Optimization of Modular Acoustic Filters
Dingzeyu Li, David I.W. Levin, Wojciech Matusik, Changxi Zheng
SIGGRAPH 2016
[pdf] [low-res] [video] [slides] [project page]



Interactive Acoustic Transfer Approximation for Modal Sound
Dingzeyu Li, Yun Fei, Changxi Zheng
SIGGRAPH 2016
[pdf] [low-res] [video] [slides] [project page]



Expediting Precomputation for Reduced Deformable Simulation
Yin Yang, Dingzeyu Li, Weiwei Xu, Yuan Tian, Changxi Zheng
SIGGRAPH Asia 2015
[pdf] [low-res] [video] [project page]



Motion-Aware KNN Laplacian for Video Matting
Dingzeyu Li, Qifeng Chen, Chi-Keung Tang
ICCV 2013
[pdf] [video] [project page]



KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
TPAMI 2013
[pdf] [code] [project page]



KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
CVPR 2012
[pdf] [code] [project page]



Product Impacts, Tech Transfers, and Demos

Speech-Aware Animation
Shipped in Adobe Character Animator Public Beta 2020 [Release Note]
Press Coverage: [9to5Mac] [VentureBeat] [EnterpriseTalk] [Animation World Network] [Computer Graphics World] [BlogCritics] [ProVideo Coalition] [Animation Magazine]
Dingzeyu Li



Project Sweet Talk: Audio-Driven Facial Animation from a Single Image
Adobe MAX Sneaks Demo 2019
[video] [TechCrunch] [Adobe blog]
Presenter: Dingzeyu Li
Collaborators: Yang Zhou, Jose Echevarria, Eli Shechtman



Robust Noise-Resilient Automatic Lipsync and Interactive Adjustment
Shipped in Adobe Character Animator 2019 [Release Note]
Dingzeyu Li



Physics-Aware 3D Shape Drop to Ground
Shipped in Adobe Dimension 2019
Dingzeyu Li



Interns/Students Mentored

  1. Arda Senocak, KAIST, 2020
  2. Eunjeong Koh, University of California, San Diego, 2020
  3. Yang Zhou, UMass Amherst, 2019, 2020
  4. Yujia Wang, Beijing Institute of Technology, 2019, 2020
  5. Yapeng Tian, University of Rochester, 2019
  6. Zhenyu Tang, University of Maryland, 2019
  7. Henrique Maia, Columbia University, 2018
  8. Avinash S. Nair, Columbia University, 2017

Patents

  1. Style-Aware Audio-Driven Talking Head Animation From a Single Image. Filed, 2019.
  2. Rendering Scene-Aware Audio Using Neural Network-Based Acoustic Analysis. Filed, 2019.

