Object Detection Competition --- Google AI Open Images - Object Detection Track


https://www.kaggle.com/c/google-ai-open-images-object-detection-track#Evaluation

 

Submissions are evaluated by computing mean Average Precision (mAP), modified to take into account the annotation process of the Open Images dataset (the mean is taken over per-class APs). The metric is described on the Open Images Challenge website.

The final mAP is computed as the average AP over the 500 classes. The participants will be ranked on this final metric.
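
To make the ranking metric concrete, below is a minimal Python sketch that computes a per-class average precision from ranked detections and then averages over the classes. It is an illustration only: the official Open Images metric additionally accounts for verified image-level labels and group-of boxes, and the box-to-ground-truth matching step (IoU >= 0.5) is assumed to have been done already; all names here are invented for the example.

import numpy as np

def average_precision(scores, is_true_positive, num_groundtruth):
    # scores: confidence of each detection of one class.
    # is_true_positive: 1 if that detection matched a previously
    # unmatched ground-truth box at IoU >= 0.5, else 0.
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / max(num_groundtruth, 1)
    precision = tp_cum / (tp_cum + fp_cum)
    # Make precision monotonically non-increasing, then integrate
    # it over recall (all-point interpolation).
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

def mean_average_precision(per_class_ap):
    # Final score: unweighted mean of the per-class APs (500 classes here).
    return sum(per_class_ap.values()) / len(per_class_ap)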

Kaggle's production code in C# can be viewed here. The metric is also implemented as part of the TensorFlow Object Detection API; see this tutorial on running the evaluation in Python.

Kernel Submissions

You can make submissions directly from Kaggle Kernels. By adding your teammates as collaborators on a kernel, you can share and edit code privately with them.

Submission File

For each image in the test set, you must predict a list of boxes describing objects in the image. Each box is described as

ImageID,PredictionString
ImageID,{Label Confidence XMin YMin XMax YMax},{...}
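
For concreteness, here is a small sketch that writes a submission file in this format. It assumes, as in the Open Images challenges, that the boxes inside PredictionString are space-separated and that XMin/YMin/XMax/YMax are normalized to [0, 1]; the helper name and the sample data are hypothetical.

import csv

def prediction_string(boxes):
    # boxes: (label, confidence, xmin, ymin, xmax, ymax) tuples with
    # coordinates normalized to [0, 1].
    return " ".join(
        f"{label} {conf:.4f} {xmin:.4f} {ymin:.4f} {xmax:.4f} {ymax:.4f}"
        for label, conf, xmin, ymin, xmax, ymax in boxes
    )

# Hypothetical predictions; labels are MID codes such as /m/01g317 (Person).
predictions = {
    "0000001": [("/m/01g317", 0.91, 0.10, 0.20, 0.45, 0.80)],
    "0000002": [],  # an image with no detections still gets a row
}

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ImageID", "PredictionString"])
    for image_id, boxes in predictions.items():
        writer.writerow([image_id, prediction_string(boxes)])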




TensorFlow's built-in evaluation code: https://github.com/tensorflow/models/tree/master/research/object_detection
Evaluation metric documentation: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/challenge_evaluation.md
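
As a rough sketch of driving that evaluator from Python, the snippet below feeds one image's ground truth and detections into the challenge evaluator. It assumes the OpenImagesDetectionChallengeEvaluator class and the standard field names from the TensorFlow Object Detection API; the exact class name, the expected input fields (e.g. verified image-level labels and group-of flags), and the CSV-based command-line entry point should be checked against the challenge_evaluation.md linked above, since the API has changed across releases.

import numpy as np
from object_detection.core import standard_fields
from object_detection.utils import object_detection_evaluation

# Category ids must match the challenge label map (500 classes in total);
# truncated here for illustration.
categories = [{"id": 1, "name": "/m/01g317"}]

evaluator = object_detection_evaluation.OpenImagesDetectionChallengeEvaluator(
    categories)

# Ground truth for one image: boxes are [ymin, xmin, ymax, xmax].
evaluator.add_single_ground_truth_image_info(
    "0000001",
    {
        standard_fields.InputDataFields.groundtruth_boxes:
            np.array([[0.2, 0.1, 0.8, 0.45]]),
        standard_fields.InputDataFields.groundtruth_classes:
            np.array([1]),
        # The challenge metric also consumes verified image-level labels
        # and group-of flags; see the linked documentation for those fields.
    },
)

# Detections for the same image, with one score per box.
evaluator.add_single_detected_image_info(
    "0000001",
    {
        standard_fields.DetectionResultFields.detection_boxes:
            np.array([[0.2, 0.1, 0.8, 0.45]]),
        standard_fields.DetectionResultFields.detection_scores:
            np.array([0.9]),
        standard_fields.DetectionResultFields.detection_classes:
            np.array([1]),
    },
)

print(evaluator.evaluate())  # dict with per-class APs and the mean AP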


