Evaluating performance of different detector/classifier techniques.
Dear all,
I am fairly confused about how to evaluate different detectors. For example, if I train two different detectors on the same training set and then evaluate them on the same test set, each detected bounding box will receive a different score depending on the features and the number of classifiers/trees used (e.g. in PBT or Random Forests).
So how do people in computer vision compare detector performance, e.g. with a precision-recall curve, given that the scores lie on different scales? Do they apply any normalization, such as rescaling the scores to [0, 1]? I do understand that precision at a given recall point is obtained by adjusting the score threshold.
Your kind assistance is highly appreciated.
Many thanks,
Dhorl Jr
Answers (1)
Image Analyst
on 23 Sep 2014
You can use ROC curves to evaluate performance, assuming you have ground truth so you know which classifications are correct and incorrect: http://en.wikipedia.org/w/index.php?title=Receiver_operating_characteristic&redirect=no
You can also use AdaBoost to combine different classifiers and improve overall performance: http://en.wikipedia.org/wiki/AdaBoost
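On the normalization question: a precision-recall (or ROC) curve depends only on the *ranking* that a detector's scores induce over its detections, not on the score scale itself, so two detectors with completely different score ranges can be compared directly by plotting their curves on the same axes. A minimal sketch in Python illustrating this (the scores and labels below are made-up data for illustration, not from any real detector):

```python
# Sketch: comparing detectors whose confidence scores use different,
# uncalibrated ranges. A PR curve sweeps the decision threshold over the
# scores, so only the score ordering matters; no rescaling to [0, 1]
# is needed before comparison.

def pr_curve(scores, labels):
    """Return (recall, precision) points, one per threshold position.

    labels: 1 = detection matches ground truth, 0 = false positive.
    """
    total_pos = sum(labels)
    # Sort detections by descending score; the absolute scale is irrelevant.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    points = []
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((tp / total_pos, tp / (tp + fp)))
    return points

# Hypothetical test-set results for two detectors on the same detections:
# detector A scores in [0, 1], detector B produces raw unbounded margins.
labels   = [1, 0, 1, 1, 0, 1]
scores_a = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
scores_b = [4.1, -0.5, 2.2, 1.7, 3.0, 0.1]

curve_a = pr_curve(scores_a, labels)
curve_b = pr_curve(scores_b, labels)

# Rescaling a detector's scores (any monotonic transform) leaves its
# PR curve unchanged, because the ranking is unchanged:
assert pr_curve([10 * s for s in scores_a], labels) == curve_a
```

The two curves can then be plotted together (recall on the x-axis, precision on the y-axis); the detector whose curve dominates, or which has the larger area under the curve, performs better, regardless of how its raw scores are scaled.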