
Dingzeyu Li
李丁泽宇

Research Scientist
Adobe Seattle

ding@dingzeyu.li
CV



News

  • Feb 2021 - I am invited to give a talk on audiovisual research for accessibility at Dartmouth College.
    Thanks, Prof. Bo Zhu!
  • Dec 2020 - 🎉 Our latest work on making videos accessible has been conditionally accepted to CHI 2021! 🎉
    Congrats to Yujia 👩🏻‍🎓 and my collaborators! More details coming soon!
  • Nov 2020 - I am invited as a panelist to discuss the future of audiovisual research at the VALSE webinar on Nov 25.
    Other speakers and panelists include Di Hu, Chuang Gan, Yapeng Tian, Ruohan Gao, and Hang Zhou. Everyone is welcome to attend!
  • Nov 2020 - As the Adobe Fellowship Program co-chairs, Valentina Shin and I hosted an external webinar featuring five past Adobe Fellowship winners who are now full-time employees.
    Learn more and watch the recording on the Adobe Fellowship 🎓 website.
  • More news...

About

I am a Research Scientist at Adobe Research. I received my PhD in Computer Science from Columbia University and my BEng from HKUST.



I am interested in audiovisual cross-modal analysis and synthesis for accessibility. Leveraging tools from computer vision, graphics, deep learning, and HCI, I focus on novel creative authoring and editing experiences for everyone. My past research and engineering have been recognized by an Emmy Award 🏆 for Technology and Engineering (2020), two Adobe MAX Sneaks Demos ✨ (2019, 2020), an ACM UIST Best Paper Award 🏅 (2017), an Adobe Research Fellowship 🎓 (2017), an NVIDIA PhD Fellowship Finalist (2017), a Shapeways Educational Grant 🧧 (2016), and an HKUST academic achievement medal 🥇 (2013). I have served on the international program committees of Eurographics 2020/2021, Graphics Interface 2020, and ACM Multimedia 2019, and as a reviewer for various academic conferences, including SIGGRAPH, CVPR, ICCV, UIST, and CHI.



I'm always looking for interns and collaborators to work on research projects that lead to publications, patents, and product features. Please feel free to reach out to learn more.

Outside of my day-to-day research, I also produce and host an interview podcast, "李丁聊天室".

Publications

Toward Automatic Audio Description Generation for Accessible Videos
Yujia Wang, Wei Liang, Haikun Huang, Yongqi Zhang, Dingzeyu Li, Lap-Fai Yu
CHI 2021
[pdf]



MakeItTalk: Speaker-Aware Talking Head Animation
Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis, Dingzeyu Li
SIGGRAPH Asia 2020
[arxiv] [video] [code]
[MAX Sneaks demo] [TechCrunch] [Adobe blog]



Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing
Yapeng Tian, Dingzeyu Li, Chenliang Xu
ECCV 2020 (Spotlight)
[pdf] [arxiv] [project page] [code]
CVPR 2020 Sight and Sound Workshop
[pdf] [video]



Deep Audio Prior
Yapeng Tian, Chenliang Xu, Dingzeyu Li
CVPR 2020 Sight and Sound Workshop
[pdf] [video]
arXiv 2020
[arxiv] [pdf] [code] [demo]



Scene-Aware Audio Rendering via Deep Acoustic Analysis
Zhenyu Tang, Nicholas J. Bryan, Dingzeyu Li, Timothy R. Langlois, Dinesh Manocha
IEEE VR 2020 (Journal Track) / TVCG
[arxiv] [project page] [video] [code]



Scene-Aware Background Music Synthesis
Yujia Wang, Wei Liang, Wanwan Li, Dingzeyu Li, Lap-Fai Yu
ACM Multimedia 2020 (Oral Presentation)
[pdf]



LayerCode: Optical Barcodes for 3D Printed Shapes
Henrique Teles Maia, Dingzeyu Li, Yuan Yang, Changxi Zheng
SIGGRAPH 2019
[pdf] [low-res] [video] [project page with dataset] [code]



Audible Panorama: Automatic Spatial Audio Generation for Panorama Imagery
Haikun Huang, Michael Solah, Dingzeyu Li, Lap-Fai Yu
CHI 2019
[pdf] [low-res] [video] [project page] [sound database]



Scene-Aware Audio for 360° Videos
Dingzeyu Li, Timothy R. Langlois, Changxi Zheng
SIGGRAPH 2018
[pdf] [low-res] [video] [project page]



AirCode: Unobtrusive Physical Tags for Digital Fabrication
Dingzeyu Li, Avinash S. Nair, Shree K. Nayar, Changxi Zheng
UIST 2017
  Best Paper Award
[pdf] [low-res] [video] [slides] [talk] [code] [project page]



Interacting with Acoustic Simulation and Fabrication
Dingzeyu Li
UIST 2017 Doctoral Symposium
[pdf] [arxiv]



Crumpling Sound Synthesis
Gabriel Cirio, Dingzeyu Li, Eitan Grinspun, Miguel A. Otaduy, Changxi Zheng
SIGGRAPH Asia 2016
[pdf] [low-res] [user study code] [video] [project page]



Acoustic Voxels: Computational Optimization of Modular Acoustic Filters
Dingzeyu Li, David I.W. Levin, Wojciech Matusik, Changxi Zheng
SIGGRAPH 2016
[pdf] [low-res] [video] [slides] [project page]



Interactive Acoustic Transfer Approximation for Modal Sound
Dingzeyu Li, Yun Fei, Changxi Zheng
SIGGRAPH 2016
[pdf] [low-res] [video] [slides] [project page]



Expediting Precomputation for Reduced Deformable Simulation
Yin Yang, Dingzeyu Li, Weiwei Xu, Yuan Tian, Changxi Zheng
SIGGRAPH Asia 2015
[pdf] [low-res] [video] [project page]



Motion-Aware KNN Laplacian for Video Matting
Dingzeyu Li, Qifeng Chen, Chi-Keung Tang
ICCV 2013
[pdf] [video] [project page]



KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
TPAMI 2013
[pdf] [code] [project page]



KNN Matting
Qifeng Chen, Dingzeyu Li, Chi-Keung Tang
CVPR 2012
[pdf] [code] [project page]



(Public) Product Impacts, Tech Transfers, and Demos

Speech-Aware Animation
Shipped in Adobe Character Animator 2020 as a Sensei-powered ML feature
[Official Release Note] [Public Beta Release Note]
Press Coverage: [Forbes Coverage] [9to5Mac] [VentureBeat] [EnterpriseTalk] [Animation World Network] [Computer Graphics World] [BlogCritics] [ProVideo Coalition] [Animation Magazine]
Lead Researcher/Developer: Dingzeyu Li



Project On the Beat: An AI-powered music video editing tool for synchronizing body movements to beats
Adobe MAX Sneaks Demo 2020
[video] [Protocol] [Adobe blog]
Presenter: Yang Zhou
Collaborators: Dingzeyu Li, Jun Saito, Deepali Aneja, Jimei Yang



Project Sweet Talk: Audio Driven Facial Animation from Single Image
Adobe MAX Sneaks Demo 2019
[video] [TechCrunch] [Adobe blog]
Presenter: Dingzeyu Li
Collaborators: Yang Zhou, Jose Echevarria, Eli Shechtman



Robust Noise-Resilient Automatic Lipsync and Interactive Adjustment
Shipped in Adobe Character Animator 2019 [Release Note]
Lead Researcher/Developer: Dingzeyu Li



Physics-Aware 3D Shape Drop to Ground
Shipped in Adobe Dimension 2019
Lead Researcher/Developer: Dingzeyu Li



Research Interns/Students Mentored

  1. Arda Senocak, KAIST, 2020
  2. Yang Zhou, UMass Amherst, 2019, 2020
  3. Yujia Wang, Beijing Institute of Technology, 2019, 2020
  4. Yapeng Tian, University of Rochester, 2019
  5. Zhenyu Tang, University of Maryland, 2019
  6. Henrique Maia, Columbia University, 2018
  7. Avinash S. Nair, Columbia University, 2017

Patents

  1. Rendering Scene-Aware Audio Using Neural Network-Based Acoustic Analysis. Filed, 2019.
  2. Style-Aware Audio-Driven Talking Head Animation From a Single Image. Filed, 2020.
  3. Selecting and Performing Operations on Hierarchical Clusters of Video Segments. Filed, 2020.
  4. Interacting With Hierarchical Clusters of Video Segments Using a Metadata Panel. Filed, 2020.
  5. Segmentation and Hierarchical Clustering of Video. Filed, 2020.
  6. Interacting With Hierarchical Clusters of Video Segments Using a Metadata Search. Filed, 2020.
  7. Interacting With Hierarchical Clusters of Video Segments Using a Video Timeline. Filed, 2020.
  8. Refining Image Acquisition Data Through Domain Adaptation. Filed, 2020.

