Abstract

As computer vision datasets grow larger, the community increasingly relies on crowdsourced annotations to train and test its algorithms. Because the capability of online annotators is heterogeneous and unpredictable, various strategies have been proposed to "clean" crowdsourced annotations. However, these strategies typically involve collecting more annotations, perhaps of different types (e.g., a grading task), rather than computationally assessing the annotation or image content. In this paper we propose and evaluate several strategies for automatically estimating the quality of a spatial object annotation. We show that one can significantly outperform simple baselines, such as that used by LabelMe, by combining multiple image-based annotation assessment strategies.
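As a concrete illustration of what an image-based assessment cue can look like, the sketch below scores an annotation polygon by the fraction of its boundary that falls near image edges. This is a hedged example under assumed conventions (polygon as an (N, 2) vertex array, Canny thresholds, tolerance in pixels), not the paper's exact scoring function.

```python
# Minimal sketch of one plausible image-based annotation quality cue:
# how much of the annotation boundary coincides with image edges.
# Helper name, thresholds, and polygon format are assumptions.
import cv2
import numpy as np

def edge_agreement_score(image_bgr, polygon, tolerance_px=3):
    """Fraction of the annotation boundary lying near a Canny edge."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)

    # Tolerate small localization errors by dilating the edge map.
    kernel = np.ones((2 * tolerance_px + 1, 2 * tolerance_px + 1), np.uint8)
    edges = cv2.dilate(edges, kernel)

    # Rasterize the annotation boundary onto the image grid.
    boundary = np.zeros(edges.shape, np.uint8)
    pts = np.asarray(polygon, np.int32).reshape(-1, 1, 2)
    cv2.polylines(boundary, [pts], isClosed=True, color=255, thickness=1)

    on_boundary = boundary > 0
    if not on_boundary.any():
        return 0.0
    return float((edges[on_boundary] > 0).mean())
```

A score near 1 indicates the annotation boundary follows strong image edges; a score near 0 suggests a sloppy or misplaced annotation. In practice several such cues would be combined rather than used in isolation.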

Paper

Sirion Vittayakorn, James Hays. Quality Assessment for Crowdsourced Object Annotations.
British Machine Vision Conference (BMVC) 2011. Dundee, Scotland.
[PDF][Poster]

BibTeX

@inproceedings{Sirion:2011:qacoa,
    Author    = {Sirion Vittayakorn and James Hays},
    Title     = {Quality Assessment for Crowdsourced Object Annotations},
    Booktitle = {Proceedings of the British Machine Vision Conference (BMVC)},
    Year      = {2011},
}

Download

Dataset                    Size   Description
LabelMe Object crops       55 MB  1000 image crops -- 200 each from 5 object categories.
Ground Truth Annotations   12 MB  Accurate "ground truth" annotations for the LabelMe Object crops dataset.
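The ground truth annotations allow the quality of a crowdsourced annotation to be quantified, for example by the overlap (intersection-over-union) between the crowdsourced polygon and its ground-truth counterpart. The sketch below assumes both annotations are available as polygon vertex arrays; the function name and data layout are illustrative, not the dataset's actual file format.

```python
# Minimal sketch: overlap score between a crowdsourced annotation and the
# ground truth, both given as polygon vertex arrays. Assumed layout, not
# the dataset's actual format.
import cv2
import numpy as np

def polygon_iou(poly_a, poly_b, image_shape):
    """Intersection-over-union of two polygons rasterized onto the image grid."""
    mask_a = np.zeros(image_shape[:2], np.uint8)
    mask_b = np.zeros(image_shape[:2], np.uint8)
    cv2.fillPoly(mask_a, [np.asarray(poly_a, np.int32)], 1)
    cv2.fillPoly(mask_b, [np.asarray(poly_b, np.int32)], 1)
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / union if union > 0 else 0.0
```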