
Published in

2010 IEEE International Conference on Image Processing

DOI: 10.1109/icip.2010.5654330


Exploiting collective knowledge in an image folksonomy for semantic-based near-duplicate video detection

Conference paper published in 2010 by Hyun-Seok Min, Wesley De Neve, and Yong Man Ro
This paper is available in a repository.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden

Data provided by SHERPA/RoMEO

Abstract

An increasing number of duplicates and near-duplicates can be found on websites for video sharing. These duplicates and near-duplicates often infringe copyright or clutter search results. Consequently, there is a strong need for techniques that can identify duplicates and near-duplicates. In this paper, we propose a semantic-based approach to identifying near-duplicates. Our approach makes use of semantic video signatures that are constructed by detecting semantic concepts along the temporal axis of video sequences. Specifically, we make use of an image folksonomy (i.e., a set of user-contributed images annotated with user-supplied tags) to detect semantic concepts in video sequences, making it possible to exploit an unrestricted concept vocabulary. Comparative experiments using the MUSCLE-VCD-2007 dataset and folksonomy images retrieved from Flickr show that our approach is successful in identifying near-duplicates.
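
To make the idea of semantic video signatures more concrete, the sketch below is a minimal illustration, not the authors' implementation: each sampled frame is mapped to a concept histogram derived from the tags of visually similar folksonomy images (the tag lookup itself is omitted and the per-frame tag sets are assumed as input), the per-frame vectors ordered along the temporal axis form the signature, and two videos are scored by the average frame-wise cosine similarity. The function names, the toy vocabulary, and the choice of similarity measure are all illustrative assumptions.

    import numpy as np

    def frame_concept_vector(frame_tags, vocabulary):
        """Map one sampled frame to a normalised concept histogram, given the
        tags of its visually similar folksonomy images (assumed as input)."""
        vec = np.zeros(len(vocabulary))
        for tag in frame_tags:
            if tag in vocabulary:
                vec[vocabulary[tag]] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm > 0 else vec

    def video_signature(per_frame_tags, vocabulary):
        """Semantic video signature: one concept vector per sampled frame,
        ordered along the temporal axis."""
        return [frame_concept_vector(tags, vocabulary) for tags in per_frame_tags]

    def signature_similarity(sig_a, sig_b):
        """Compare two signatures frame by frame (truncated to the shorter one)
        and average the cosine similarities of the normalised concept vectors."""
        n = min(len(sig_a), len(sig_b))
        if n == 0:
            return 0.0
        return sum(float(np.dot(sig_a[i], sig_b[i])) for i in range(n)) / n

    # Illustrative usage: a tiny made-up tag vocabulary and two short "videos"
    # described by per-frame tag sets; a score near 1.0 suggests a near-duplicate.
    vocab = {"beach": 0, "sunset": 1, "dog": 2, "city": 3}
    video_a = [{"beach", "sunset"}, {"beach"}, {"dog", "beach"}]
    video_b = [{"beach", "sunset"}, {"beach", "city"}, {"dog"}]
    print(signature_similarity(video_signature(video_a, vocab),
                               video_signature(video_b, vocab)))
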