Object Detection Competition: Google AI Open Images - Object Detection Track


https://www.kaggle.com/c/google-ai-open-images-object-detection-track#Evaluation

 

Submissions are evaluated by computing mean Average Precision (mAP), modified to take into account the annotation process of the Open Images dataset (the mean is taken over per-class APs). The metric is described in detail on the Open Images Challenge website.

The final mAP is computed as the average AP over the 500 classes. The participants will be ranked on this final metric.
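
To make the averaging concrete, below is a minimal Python sketch of plain per-class AP and its unweighted mean over classes. It assumes detections have already been matched to ground truth (e.g. IoU >= 0.5, each ground-truth box matched at most once) and ignores the Open Images-specific modifications (label-hierarchy expansion, group-of boxes, image-level negative labels), so it illustrates only the basic mAP computation, not the exact challenge metric; all names are illustrative.

    import numpy as np

    def average_precision(scores, is_true_positive, num_gt):
        # AP for one class: precision accumulated at each true-positive
        # rank, divided by the number of ground-truth boxes (area under
        # the non-interpolated precision-recall curve).
        order = np.argsort(-np.asarray(scores, dtype=float))
        tp = np.asarray(is_true_positive, dtype=float)[order]
        precision = np.cumsum(tp) / (np.arange(len(tp)) + 1)
        return float(np.sum(precision * tp) / max(num_gt, 1))

    def mean_average_precision(per_class):
        # per_class: one (scores, is_true_positive, num_gt) tuple per class.
        return float(np.mean([average_precision(s, t, n) for s, t, n in per_class]))

    # Example: one class, 3 detections (2 correct), 2 ground-truth boxes.
    print(mean_average_precision([([0.9, 0.8, 0.3], [True, False, True], 2)]))  # ~0.833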

Kaggle's production implementation of the metric is written in C# and can be viewed from the competition's Evaluation page. The metric is also implemented as part of the TensorFlow Object Detection API; see the tutorial linked below on running the evaluation in Python.

Kernel Submissions

You can make submissions directly from Kaggle Kernels. By adding your teammates as collaborators on a kernel, you can share and edit code privately with them.

Submission File

For each image in the test set, you must predict a list of boxes describing the objects in the image. Each box is described as:

ImageID,PredictionString
ImageID,{Label Confidence XMin YMin XMax YMax},{...}
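
As a minimal sketch of serializing detections into this format: the snippet below assumes detections arrive as (label, confidence, xmin, ymin, xmax, ymax) tuples with coordinates normalized to [0, 1], and joins multiple boxes with spaces inside the PredictionString field, which is how Open Images submissions are typically formatted; the image IDs and "/m/..." class labels are made-up placeholders, so check the competition's sample submission for the exact delimiters.

    import csv

    # Illustrative predictions: image_id -> list of
    # (label, confidence, xmin, ymin, xmax, ymax) tuples.
    predictions = {
        "00001a21632de752": [
            ("/m/01g317", 0.83, 0.12, 0.34, 0.45, 0.90),
            ("/m/0cmf2", 0.40, 0.05, 0.05, 0.30, 0.25),
        ],
        "0000d67245642c5f": [],  # images with no detections still get a row
    }

    with open("submission.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ImageID", "PredictionString"])
        for image_id, boxes in predictions.items():
            pred = " ".join(
                f"{label} {conf:.4f} {xmin:.4f} {ymin:.4f} {xmax:.4f} {ymax:.4f}"
                for label, conf, xmin, ymin, xmax, ymax in boxes
            )
            writer.writerow([image_id, pred])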




TensorFlow's built-in evaluation code: https://github.com/tensorflow/models/tree/master/research/object_detection
Documentation of the evaluation metric: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/challenge_evaluation.md
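
A rough sketch of running that evaluation from the command line, following the linked challenge_evaluation.md: the ground-truth labels are first expanded along the class hierarchy, then a metrics script scores a predictions CSV against the expanded annotations. The script name, flag names, and file names below are recalled from the 2018-era version of that tutorial and may differ in newer revisions of the repository, so treat them as assumptions and defer to the documentation above.

    # Assumed invocation; verify names against challenge_evaluation.md.
    python object_detection/metrics/oid_od_challenge_evaluation.py \
        --input_annotations_boxes=expanded_annotations_bbox.csv \
        --input_annotations_labels=expanded_annotations_labels.csv \
        --input_class_labelmap=object_detection/data/oid_object_detection_challenge_500_label_map.pbtxt \
        --input_predictions=predictions.csv \
        --output_metrics=output_metrics.csv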






 