
Dingzeyu Li

Research Scientist
Adobe Seattle

dinli@adobe.com
CV



News

  • July 2020 - Audio-Visual Video Parsing Accepted to ECCV 2020 Spotlight
    Our work on using weak labels to parse audio-visual events in videos has been accepted to ECCV 2020 as a spotlight paper. More info to come. Congrats, Yapeng!
  • June 2020 - CVPR 2020 Sight and Sound Workshop
    Yapeng presented his intern projects on Deep Audio Prior and Audio-Visual Video Parsing. See more details here.
  • April 2020 — Internship/Collaboration Going Virtual during COVID-19
    Due to COVID-19, research internships and collaborations are either postponed or happening only virtually in 2020. Stay safe!
  • Feb 2020 - Eurographics 2020
    I am serving on the International Program Committee (IPC) of Eurographics 2020. This year the event will be held virtually in Norrköping, Sweden.
  • January 2020 — I have been promoted to Research Scientist 2!
    Thanks to my wonderful collaborators and colleagues!
  • December 2019 — 3D Physics Engine Shipped in Adobe Dimension CC!
    Now you can drop 3D objects into Dimension, and physics simulation lays them flat on the ground. The feature is named "Drop to Ground".
  • November 2019 — Project Sweet Talk is selected as an Adobe MAX Sneaks demo in Los Angeles.
    Project Sweet Talk creates dynamic videos from static images. We can animate drawings from centuries ago, random sketches, 2D cartoon characters, Japanese manga, stylized caricatures, and casual photos. See the demo here.
  • October 2019 — Our Scene-Aware Deep Acoustic Analysis Paper is Accepted to IEEE VR/TVCG!
  • June 2019 — Summer Intern Season Starts!
  • May 2019 — LayerCode is accepted to SIGGRAPH 2019!
  • April 2019 — Noise-Resilient Lip-Sync Shipped in Adobe Character Animator CC!
  • February 2019 — Immersive Audio Talk @ Adobe Global Tech Summit
    I presented the latest progress from Adobe Research's audio team at Adobe's largest internal event.
  • February 2019 - Physical Hyperlinks Talk @ University of Washington
    Invited by Prof. Adriana Schulz, I gave a talk on Physical Hyperlinks for Personalized Fabrication, a summary of my recent work on Acoustic Voxels, AirCode, and LayerCode.
  • December 2018 - Eurographics 2019 Short Paper Program Committee
  • December 2018 — First CHI Paper on Audible Panorama is Accepted!
  • August 2018 — Scene-Aware Audio in 360 Videos Presented at SIGGRAPH

About

I am a Research Scientist at Adobe Research. I received my PhD from Columbia University and my BEng from HKUST.



I am interested in audiovisual cross-modal media synthesis, using tools from computer vision, graphics, deep learning, and HCI. More broadly, I am interested in novel creative authoring and editing applications for everyone. My past research and engineering have been recognized with an Adobe MAX Sneaks Demo (2019), an ACM UIST Best Paper Award (2017), an Adobe Research Fellowship (2017), an NVIDIA PhD Fellowship finalist selection (2017), a Shapeways Educational Grant (2016), and an HKUST academic achievement medal (2013). I have served on the international program committees of Eurographics 2020, Graphics Interface 2020, and ACM Multimedia 2019, and as a reviewer for academic conferences including SIGGRAPH, CVPR, ICCV, UIST, and CHI.



I'm always looking for interns and collaborators to work on research projects that lead to publications, patents, and product features. Please feel free to reach out to learn more.

Publications

Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing
Yapeng Tian, Dingzeyu Li, Chenliang Xu
ECCV 2020 (Spotlight)
[full paper and code coming soon]
CVPR 2020 Sight and Sound Workshop
[pdf] [video]



MakeItTalk: Speaker-Aware Talking Head Animation
Yang Zhou, Dingzeyu Li, Xintong Han, Evangelos Kalogerakis, Eli Shechtman, Jose Echevarria
arXiv 2020, in submission
[arxiv] [pdf] [video]



Deep Audio Prior
Yapeng Tian, Chenliang Xu, Dingzeyu Li
arXiv 2019, in submission
[arxiv] [pdf] [code] [demo]



Scene-Aware Audio Rendering via Deep Acoustic Analysis
Zhenyu Tang, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, Dinesh Manocha
IEEE VR 2020 (Journal Track) / TVCG
[arxiv] [project page] [video] [code coming soon]



LayerCode: Optical Barcodes for 3D Printed Shapes
Henrique Teles Maia, Dingzeyu Li, Yuan Yang, Changxi Zheng
SIGGRAPH 2019
[pdf] [low-res] [video] [project page with dataset] [code]



Audible Panorama: Automatic Spatial Audio Generation for Panorama Imagery
Haikun Huang, Michael Solah, Dingzeyu Li, Lap-Fai Yu
CHI 2019
[pdf] [low-res] [video] [project page] [sound database]



Scene-Aware Audio for 360° Videos
Dingzeyu Li, Timothy R. Langlois, Changxi Zheng
SIGGRAPH 2018
[pdf] [low-res] [video] [project page]



AirCode: Unobtrusive Physical Tags for Digital Fabrication
Dingzeyu Li, Avinash S. Nair, Shree K. Nayar, Changxi Zheng
UIST 2017
  Best Paper Award
[pdf] [low-res] [video] [slides] [talk] [code] [project page]



Interacting with Acoustic Simulation and Fabrication
Dingzeyu Li
UIST 2017 Doctoral Symposium
[pdf] [arxiv]



Crumpling Sound Synthesis
Gabriel Cirio, Dingzeyu Li, Eitan Grinspun, Miguel A. Otaduy, Changxi Zheng
SIGGRAPH Asia 2016
[pdf] [low-res] [user study code] [video] [project page]



Acoustic Voxels: Computational Optimization of Modular Acoustic Filters
Dingzeyu Li, David I.W. Levin, Wojciech Matusik, Changxi Zheng
SIGGRAPH 2016
[pdf] [low-res] [video] [slides] [project page]



Interactive Acoustic Transfer Approximation for Modal Sound
Dingzeyu Li, Yun Fei, Changxi Zheng
SIGGRAPH 2016
[pdf] [low-res] [video] [slides] [project page]



Expediting Precomputation for Reduced Deformable Simulation
Yin Yang, Dingzeyu Li, Weiwei Xu, Yuan Tian, Changxi Zheng
SIGGRAPH Asia 2015
[pdf] [low-res] [video] [project page]



Motion-Aware KNN Laplacian for Video Matting
Dingzeyu Li, Qifeng Chen, Chi-Keung Tang
ICCV 2013
[pdf] [video] [project page]



KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
TPAMI 2013
[pdf] [code] [project page]



KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
CVPR 2012
[pdf] [code] [project page]



Patents, Product Tech Transfers, Demos

Project Sweet Talk: Audio Driven Facial Animation from Single Image
Adobe MAX Sneaks Demo 2019
[video] [TechCrunch] [Adobe blog]
Presenter: Dingzeyu Li
Collaborators: Yang Zhou, Jose Echevarria, Eli Shechtman



Style-Aware Audio-Driven Talking Head Animation From a Single Image
Patent Filed (Pending), 2020
Yang Zhou, Jose Echevarria, Elya Shechtman, Dingzeyu Li



Rendering Scene-Aware Audio Using Neural Network-Based Acoustic Analysis
Patent Filed (Pending), 2019
Zhenyu Tang, Dingzeyu Li, Nicholas J. Bryan, Timothy R. Langlois



Robust Noise-Resilient Automatic Lipsync and Interactive Adjustment
Shipped in Adobe Character Animator 2019 [Release Note]
Dingzeyu Li



Physics-Aware 3D Shape Drop to Ground
Shipped in Adobe Dimension 2019
Dingzeyu Li



Interns/Students Mentored

  1. Arda Senocak, KAIST, 2020
  2. Eunjeong Koh, University of California, San Diego, 2020
  3. Yang Zhou, UMass Amherst, 2019, 2020
  4. Yujia Wang, Beijing Institute of Technology, 2019, 2020
  5. Yapeng Tian, University of Rochester, 2019
  6. Zhenyu Tang, University of Maryland, 2019
  7. Henrique Maia, Columbia University, 2018
  8. Avinash S. Nair, Columbia University, 2017

Open Source

The following is a list of research and personal projects that I have open-sourced.

Deep Audio Prior [github] [demo]

Audio Source Separation using Deep Networks without Training Data
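
As a rough illustration of the idea (untrained networks acting as a prior that separates a two-source mixture by jointly fitting its spectrogram), here is a minimal PyTorch sketch. The architecture, loss, and optimizer settings are my own simplified assumptions; the linked GitHub repo is the authoritative implementation.

    # Minimal sketch of the deep-audio-prior idea; NOT the released code.
    # Two untrained generators map fixed random noise to per-source
    # spectrograms, optimized only to reconstruct the observed mixture.
    import torch
    import torch.nn as nn

    def small_generator():
        # Illustrative architecture; outputs a non-negative spectrogram.
        return nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),
        )

    def separate(mix_spec, steps=2000, lr=1e-3):
        """mix_spec: (freq, time) magnitude spectrogram of a two-source mixture."""
        f, t = mix_spec.shape
        target = mix_spec.unsqueeze(0).unsqueeze(0)            # (1, 1, F, T)
        nets = [small_generator(), small_generator()]
        noises = [torch.randn(1, 16, f, t) for _ in nets]      # fixed random inputs
        opt = torch.optim.Adam([p for n in nets for p in n.parameters()], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            estimates = [net(z) for net, z in zip(nets, noises)]
            loss = ((sum(estimates) - target) ** 2).mean()     # mixture reconstruction
            loss.backward()
            opt.step()
        return [e.detach().squeeze() for e in estimates]       # per-source estimates

Roughly, the inductive bias of the generators, rather than any training data, is what encourages the two estimates to settle into distinct sources; see the paper and repo for the full formulation.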


AirCode [github]

Computational structured-light imaging and a customized decoding process


Crumpling Sound User Study [github] [demo]

User study to understand the perceived similarity between simulated and recorded sounds


KNN Matting [github]

Foreground extraction and layer reconstruction
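
For readers unfamiliar with the approach, the sketch below shows a simplified KNN-matting recipe in Python: link each pixel to its k nearest neighbors in a color-plus-position feature space, form the graph Laplacian, and solve a sparse linear system regularized toward user scribbles. The feature weighting and the regularization weight lam are illustrative assumptions; the released code in the repo is the authoritative implementation.

    # Simplified KNN-matting sketch; parameter choices are assumptions.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve
    from sklearn.neighbors import NearestNeighbors

    def knn_matte(image, scribbles, k=10, lam=100.0):
        """image: (H, W, 3) floats in [0, 1]; scribbles: (H, W) with 1=fg, 0=bg, -1=unknown."""
        h, w, _ = image.shape
        n = h * w
        ys, xs = np.mgrid[0:h, 0:w]
        # Per-pixel feature: color plus down-weighted spatial coordinates.
        coords = np.stack([ys, xs], axis=-1).reshape(n, 2) / max(h, w)
        feats = np.concatenate([image.reshape(n, 3), 0.1 * coords], axis=1)
        dists, idx = NearestNeighbors(n_neighbors=k + 1).fit(feats).kneighbors(feats)
        rows = np.repeat(np.arange(n), k)
        cols = idx[:, 1:].ravel()                               # drop the self-match
        vals = 1.0 - dists[:, 1:].ravel() / (dists[:, 1:].max() + 1e-8)
        A = sp.coo_matrix((vals, (rows, cols)), shape=(n, n))
        A = (A + A.T) / 2.0                                     # symmetric affinity
        L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A     # graph Laplacian
        known = (scribbles.reshape(n) >= 0).astype(float)
        b = lam * np.clip(scribbles.reshape(n), 0, 1) * known
        alpha = spsolve((L + lam * sp.diags(known)).tocsr(), b) # (L + lam*D) alpha = lam*s
        return np.clip(alpha, 0.0, 1.0).reshape(h, w)

The resulting alpha matte can then be used to pull the foreground layer out of the image.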


SeCluMon (Server Cluster Monitor) [github] [demo]

Python-based scripts to monitor CPU, RAM, and other info on our lab cluster
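
As a flavor of what such a monitor can report per node, here is a minimal sketch that assumes the third-party psutil package; the actual SeCluMon scripts in the repo may be organized differently.

    # Minimal per-node stats snapshot; psutil is an assumed dependency.
    import json
    import socket
    import psutil

    def node_stats():
        """Collect basic CPU/RAM/disk usage for the current machine."""
        return {
            "host": socket.gethostname(),
            "cpu_percent": psutil.cpu_percent(interval=1.0),
            "ram_percent": psutil.virtual_memory().percent,
            "disk_percent": psutil.disk_usage("/").percent,
        }

    if __name__ == "__main__":
        # Each node prints (or could POST) a JSON snapshot for a central dashboard.
        print(json.dumps(node_stats(), indent=2))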


3D Models [thingiverse]

3D models in my research and personal projects


