Published in

2012 IEEE International Symposium on Circuits and Systems

DOI: 10.1109/iscas.2012.6271635

Self-learning-based rain streak removal for image/video

Proceedings article published in 2012 by Li-Wei Kang, Chia-Wen Lin, Che-Tsung Lin, Yu-Chen Lin
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Rain removal from an image or video is a challenging problem that has recently been investigated extensively. In our previous work, we proposed the first single-image-based rain streak removal framework, formulating the task as an image decomposition problem based on morphological component analysis (MCA) and solving it via dictionary learning and sparse coding. However, the dictionary learning process in that work was not fully automatic: the two dictionaries used for rain removal were selected heuristically or with human intervention. In this paper, we extend our previous work to an automatic self-learning-based rain streak removal framework for a single image. We propose to self-learn the two dictionaries used for rain removal automatically, without additional information or any assumptions. We then extend our single-image method to video-based rain removal in a static scene by exploiting the temporal information of successive frames and reusing the dictionaries learned from earlier frames, while maintaining the temporal consistency of the video. As a result, the rain component can be removed from the image or video while preserving most of the original details. Experimental results demonstrate the efficacy of the proposed algorithm.
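The abstract's decomposition-plus-dictionary-learning pipeline can be illustrated with a minimal sketch. This is my own simplified illustration, not the authors' code: the image is split into low- and high-frequency layers (here with a simple box blur as the low-pass step, which is an assumption), a dictionary is learned on high-frequency patches, and each patch is sparse-coded over that dictionary. The paper's additional step of classifying atoms into rain and non-rain sub-dictionaries is omitted.

```python
# Illustrative sketch of MCA-style rain-removal preprocessing (my own
# simplification, not the authors' implementation).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
image = rng.random((64, 64))          # stand-in for a grayscale rainy image

# Step 1: decompose into low-frequency (structure) and high-frequency
# (details + rain streaks) layers; a box blur is used here for simplicity.
low = uniform_filter(image, size=8)
high = image - low

# Step 2: extract 8x8 patches from the high-frequency layer and learn a
# small dictionary on them (the paper would then split its atoms into
# rain vs. non-rain sub-dictionaries).
patches = extract_patches_2d(high, (8, 8), max_patches=200, random_state=0)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)  # remove per-patch DC component

dico = MiniBatchDictionaryLearning(n_components=16, alpha=1.0,
                                   random_state=0)
codes = dico.fit_transform(X)          # sparse code for each patch

print(dico.components_.shape)          # 16 learned atoms of length 64
print(codes.shape)                     # one sparse code per patch
```

Reconstructing the non-rain layer would amount to sparse-coding each patch over only the non-rain atoms and adding the result back to the low-frequency layer.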