
Repeatability test #61

@Celebrandil

Description

After seeing the paper "PopSift: a faithful SIFT implementation for real-time applications", in which the authors claim that CudaSift performs exceptionally poorly with respect to scale changes, I got a bit worried and ran some tests to verify the claim, using the benchmark code from the paper "A comparison of affine region detectors". Unfortunately, I didn't manage to replicate their results.

The graphs below show the repeatability and the number of correspondences for image pairs in the 'bark' image set. The exact number of correspondences can vary quite a bit depending on the threshold you set, but the repeatability should be relatively stable. Also note that I didn't upscale the image in this test. I haven't yet tried to benchmark the descriptor; if CudaSift performs worse than e.g. VLFeat, it's much more likely to be due to the descriptor. I don't really know why the results differ, but if someone has a clue, I would be glad to hear it.

[Figure: Repeatability, 9-point filters]
[Figure: Number of correspondences, 9-point filters]
[Figure: Features from the first 'bark' image]
[Figure: Features from the last 'bark' image]
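
For anyone who wants to sanity-check the measure itself, below is a minimal sketch of a simplified repeatability computation, not the benchmark code from the paper. It assumes the ground-truth 3x3 homography H that ships with the Oxford 'bark' set, and it replaces the full elliptical-region-overlap criterion from "A comparison of affine region detectors" with a plain reprojection-distance threshold, so its numbers won't exactly match the published protocol.

```cpp
// Simplified point-based repeatability sketch (assumptions: H is a
// row-major 3x3 ground-truth homography mapping image-1 coordinates to
// image-2 coordinates; 'tol' is a reprojection tolerance in pixels).
// The real protocol scores elliptical region overlap instead.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Point { float x, y; };

// Apply the homography H to a point (perspective divide included).
static Point warp(const float H[9], Point p) {
  float w = H[6] * p.x + H[7] * p.y + H[8];
  return { (H[0] * p.x + H[1] * p.y + H[2]) / w,
           (H[3] * p.x + H[4] * p.y + H[5]) / w };
}

// Count keypoints from image 1 that land within 'tol' pixels of some
// keypoint in image 2 after warping, then normalize by the smaller set.
static double repeatability(const std::vector<Point> &kp1,
                            const std::vector<Point> &kp2,
                            const float H[9], float tol = 2.0f) {
  std::size_t matched = 0;
  for (const Point &p : kp1) {
    Point q = warp(H, p);
    for (const Point &r : kp2) {
      float dx = q.x - r.x, dy = q.y - r.y;
      if (dx * dx + dy * dy <= tol * tol) { ++matched; break; }
    }
  }
  std::size_t denom = std::min(kp1.size(), kp2.size());
  return denom ? static_cast<double>(matched) / denom : 0.0;
}
```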
