After seeing the paper "PopSift: a faithful SIFT implementation for real-time applications", in which the authors claim that CudaSift performs exceptionally poorly with respect to scale changes, I got a bit worried and had to run some tests to verify the claim, using the benchmark code from the paper "A comparison of affine region detectors". Unfortunately, I didn't manage to replicate their results.
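For context, the sketch below shows roughly what that benchmark measures. It is a heavily simplified version of the original MATLAB code: keypoints are treated as circles (centre plus scale) instead of affine ellipses, the common-area restriction and the local scale change of the homography are skipped, and the `Keypoint` type and tolerance values are only for illustration.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical keypoint type: centre and scale only (the real benchmark uses
// affine ellipses).
struct Keypoint { float x, y, scale; };

// Map a point through a 3x3 row-major homography H, as the benchmark does with
// the ground-truth homographies shipped with the 'bark' set.
static void project(const float H[9], float x, float y, float &u, float &v) {
  float w = H[6] * x + H[7] * y + H[8];
  u = (H[0] * x + H[1] * y + H[2]) / w;
  v = (H[3] * x + H[4] * y + H[5]) / w;
}

// Repeatability for one image pair: the fraction of keypoints in image 1 that,
// after projection into image 2, have a nearby keypoint of similar size
// (overlap error below ~40% in the original protocol).
static float repeatability(const std::vector<Keypoint> &k1,
                           const std::vector<Keypoint> &k2,
                           const float H[9], float maxOverlapError = 0.4f) {
  int correspondences = 0;
  for (const Keypoint &a : k1) {
    float u, v;
    project(H, a.x, a.y, u, v);
    for (const Keypoint &b : k2) {
      float d = std::hypot(u - b.x, v - b.y);
      float rMin = std::min(a.scale, b.scale);
      float rMax = std::max(a.scale, b.scale);
      // Crude circle-based overlap error: area of the smaller circle over the
      // larger one, ignoring the offset between the centres.
      float overlapError = 1.0f - (rMin * rMin) / (rMax * rMax);
      if (d < rMax && overlapError < maxOverlapError) {
        correspondences++;
        break;
      }
    }
  }
  int denom = (int)std::min(k1.size(), k2.size());
  return denom > 0 ? (float)correspondences / denom : 0.0f;
}
```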
The graphs below show the repeatability and the number of correspondences for image pairs in the 'bark' image set. The exact number of correspondences can vary quite a bit depending on the threshold you set, but the repeatability should be relatively stable. Also, note that I didn't upscale the images in this test. I haven't yet tried to benchmark the descriptor; if CudaSift performs worse than e.g. VLFeat, it's much more likely to be due to the descriptor. I don't really know why the results differ, but if someone has a clue, I would be glad to hear it.
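For anyone who wants to reproduce the test, the extraction itself looks roughly like the mainSift.cpp sample in this repository, something along the lines of the sketch below. The function signatures follow the current sample and may differ between versions, the file paths are just examples, and the octave count, initial blur and threshold values are placeholders rather than the exact settings I used; scaleUp is left false, matching the fact that the images were not upscaled.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include "cudaImage.h"
#include "cudaSift.h"

int main() {
  // Load two images from the 'bark' set as single-channel float images.
  cv::Mat limg = cv::imread("data/bark1.ppm", cv::IMREAD_GRAYSCALE);
  cv::Mat rimg = cv::imread("data/bark6.ppm", cv::IMREAD_GRAYSCALE);
  limg.convertTo(limg, CV_32FC1);
  rimg.convertTo(rimg, CV_32FC1);

  InitCuda(0);
  CudaImage img1, img2;
  img1.Allocate(limg.cols, limg.rows, iAlignUp(limg.cols, 128), false, NULL, (float *)limg.data);
  img2.Allocate(rimg.cols, rimg.rows, iAlignUp(rimg.cols, 128), false, NULL, (float *)rimg.data);
  img1.Download();  // host -> device
  img2.Download();

  SiftData siftData1, siftData2;
  InitSiftData(siftData1, 32768, true, true);
  InitSiftData(siftData2, 32768, true, true);

  // thresh controls how many features survive, which is why the number of
  // correspondences varies with it; scaleUp stays false (no upscaling).
  int numOctaves = 5;
  float initBlur = 1.0f, thresh = 3.5f;
  float *memoryTmp = AllocSiftTempMemory(limg.cols, limg.rows, numOctaves, false);
  ExtractSift(siftData1, img1, numOctaves, initBlur, thresh, 0.0f, false, memoryTmp);
  ExtractSift(siftData2, img2, numOctaves, initBlur, thresh, 0.0f, false, memoryTmp);
  FreeSiftTempMemory(memoryTmp);

  // Matching is only needed for the correspondence counts, not for the
  // detector repeatability itself.
  MatchSiftData(siftData1, siftData2);

  FreeSiftData(siftData1);
  FreeSiftData(siftData2);
  return 0;
}
```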
[Figure: Repeatability, 9-point filters]
[Figure: Number of correspondences, 9-point filters]
[Figure: Features from the first 'bark' image]
[Figure: Features from the last 'bark' image]