Motion-Aware KNN Laplacian for Video Matting

Dingzeyu Li, Qifeng Chen, Chi-Keung Tang

International Conference on Computer Vision (ICCV) 2013


abstract
This paper demonstrates how the nonlocal principle benefits video matting via the KNN Laplacian, which comes with a straightforward implementation using motion-aware K nearest neighbors. In hindsight, the fundamental problem to solve in video matting is to produce spatio-temporally coherent clusters of moving foreground pixels. When used as described, the motion-aware KNN Laplacian is effective in addressing this fundamental problem, as demonstrated with sparse user markup, typically on only one frame, in a variety of challenging examples featuring ambiguous foreground and background colors, changing topologies with disocclusion, significant illumination changes, fast motion, and motion blur. When incorporated into existing Laplacian-based systems, we expect our Laplacian to benefit them immediately with an improved clustering of moving foreground pixels.
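As a rough illustration of the clustering idea, below is a minimal Python sketch of a KNN matting Laplacian together with the standard penalty-based alpha solve under user constraints. This is not the authors' implementation: the helper names (`knn_laplacian`, `solve_alpha`), the brute-force neighbor search, and the penalty weight `lam` are illustrative choices. In the paper's motion-aware variant, the per-pixel feature vectors would additionally combine color with flow-compensated spatio-temporal coordinates so that nearest neighbors are found along the motion; here the features are left as abstract vectors.

```python
import numpy as np

def knn_laplacian(features, k=2):
    """Build a KNN matting Laplacian L = D - A from per-pixel features.

    features: (n, d) array, one feature vector per pixel (in the
    motion-aware setting this would include color plus flow-compensated
    spatio-temporal coordinates -- an assumption here, not shown).
    The affinity to each of the k nearest neighbors in feature space is
    1 - ||f_i - f_j|| / C, clipped to [0, 1], with C = d as a normalizer.
    """
    n, dim = features.shape
    # Brute-force pairwise distances (fine for a toy example; a k-d tree
    # or approximate nearest neighbors would be used at image scale).
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nbrs = np.argsort(dist, axis=1)[:, :k]
    A = np.zeros((n, n))
    for i in range(n):
        for j in nbrs[i]:
            w = max(0.0, 1.0 - dist[i, j] / dim)
            A[i, j] = max(A[i, j], w)
            A[j, i] = A[i, j]  # keep the affinity matrix symmetric
    return np.diag(A.sum(axis=1)) - A  # L = D - A

def solve_alpha(L, known_idx, known_alpha, lam=100.0):
    """Solve (L + lam*M) a = lam*M*a_known, the usual quadratic matting
    objective with soft constraints on user-marked pixels."""
    n = L.shape[0]
    M = np.zeros(n)
    b = np.zeros(n)
    M[known_idx] = lam
    b[known_idx] = lam * np.asarray(known_alpha, dtype=float)
    return np.linalg.solve(L + np.diag(M), b)
```

With two well-separated feature clusters and one marked pixel per cluster, the solve propagates alpha = 1 across the foreground cluster and alpha = 0 across the background cluster, which is the coherent-cluster behavior the abstract describes.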

downloads
Paper / Poster



acknowledgements
Input and output of [2, 4] are courtesy of Xue Bai. Input and output of [8] are courtesy of Yu-Wing Tai. The research was supported by the Hong Kong Research Grant Council under grant number 619313.
bibtex citation
@inproceedings{Li:2013:KVM,
  author={Dingzeyu Li and Qifeng Chen and Chi-Keung Tang},
  booktitle={2013 IEEE International Conference on Computer Vision (ICCV)},
  title={Motion-Aware KNN Laplacian for Video Matting},
  year={2013},
  month={Dec},
  pages={3599--3606},
  doi={10.1109/ICCV.2013.447},
}