Overview of the ImageCLEF 2014 Scalable Concept Image Annotation Task

Authors: UPV
Year:
Journal: CEUR Workshop Proceedings

Abstract

The ImageCLEF 2014 Scalable Concept Image Annotation task was the third edition of a challenge aimed at developing more scalable image annotation systems. Unlike traditional image annotation challenges, which rely on a set of manually annotated images as training data, participants were only allowed to use data and resources that do not require significant human effort (such as hand labeling) as new concepts to detect are introduced. The participants were provided with web data consisting of 500,000 images, which included textual features obtained from the web pages on which the images appeared, as well as various visual features extracted from the images themselves. To optimize their systems, the participants were provided with a development set of 1,940 samples and its corresponding hand-labeled ground truth for 107 concepts. The performance of the submissions was measured using a test set of 7,291 samples, hand-labeled for 207 concepts, among which 100 were new concepts unseen during development. In total, 11 teams participated in the task, submitting 58 system runs overall. Thanks to the larger number of unseen concepts, the generalization ability of the systems could be observed more clearly in the results, thus demonstrating their potential for scalability.