Published in

2006 IEEE Southwest Symposium on Image Analysis and Interpretation

DOI: 10.1109/ssiai.2006.1633735

Unsupervised Object-Based Video Segmentation Using Color And Texture

Proceedings article published in 2006 by M. Smith and A. Khotanzad
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

A new method for the temporal segmentation of video sequences into real-world objects is proposed. First, each frame undergoes a color quantization step that matches like colors extracted from the previously processed frame. JSEG's color-variance feature and texture features derived from the gray-level co-occurrence matrix (GLCM) are then extracted from each color-quantized frame and combined to obtain an improved image segmentation. Finally, a validation step compares the segmented regions of the current frame with those of the previous frame, matching existing objects across frames and automatically detecting new objects as they enter the scene. The algorithm is tested on various video segments (pans, zooms, close-ups, and multiple-object motion), and results are included.
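
The abstract gives no implementation details, so the following Python sketch (assuming NumPy and a recent scikit-image) is only a rough illustration of the kind of color-plus-texture feature combination described: for each block of a color-quantized frame it computes a simple color-variance value alongside GLCM texture statistics. The window size, GLCM distances and angles, the chosen Haralick properties, and the helper names window_features and frame_feature_map are illustrative assumptions, not values or code from the paper; in particular, JSEG's actual J value is computed over a quantized class map rather than raw channel variance.

    # Illustrative sketch only; not the authors' implementation.
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import graycomatrix, graycoprops
    from skimage.util import img_as_ubyte

    def window_features(rgb_window):
        """Feature vector for one block: color variance + GLCM texture stats."""
        # Per-channel color variance, summed -- a crude stand-in for a
        # local color-variance (JSEG-style) measure.
        color_var = float(rgb_window.reshape(-1, 3).var(axis=0).sum())

        # Gray-level co-occurrence matrix on the grayscale block.
        gray = img_as_ubyte(rgb2gray(rgb_window))
        glcm = graycomatrix(gray,
                            distances=[1],
                            angles=[0, np.pi / 2],
                            levels=256,
                            symmetric=True,
                            normed=True)
        texture = [graycoprops(glcm, prop).mean()
                   for prop in ("contrast", "homogeneity", "energy")]
        return np.array([color_var, *texture])

    def frame_feature_map(frame, win=16):
        """Tile the frame into win x win blocks and compute features per block."""
        h, w, _ = frame.shape
        rows, cols = h // win, w // win
        feats = np.zeros((rows, cols, 4))
        for r in range(rows):
            for c in range(cols):
                block = frame[r * win:(r + 1) * win, c * win:(c + 1) * win]
                feats[r, c] = window_features(block)
        return feats

The resulting per-block feature map could then be fed to any region-growing or clustering stage; the paper's own segmentation and frame-to-frame region validation steps are not reproduced here.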