[benchmark/WIP] Add ClipShots to the evaluation set #498


Open
wants to merge 1 commit into main

Conversation

@awkrail (Collaborator) commented Mar 10, 2025

Related to #484. ClipShots is the largest dataset for video shot detection. This PR adds it to the evaluation set.

@awkrail awkrail changed the title [benchmark] Add ClipShots to the evaluation set [benchmark/WIP] Add ClipShots to the evaluation set Mar 10, 2025
@Breakthrough (Owner) left a comment


Thanks! How big is the dataset / could you run the benchmarks to fill in the table?

I can also tackle that, but wanted to know if you were planning on doing so first.

@awkrail (Collaborator, Author) commented Mar 15, 2025

> Thanks! How big is the dataset / could you run the benchmarks to fill in the table?

The dataset contains 500 videos, but its annotation style seems to differ from the other datasets, BBC and AutoShots. I am now trying to reproduce performance comparable to the other datasets, but have not yet succeeded. Let me take some time to do it.

> I can also tackle that, but wanted to know if you were planning on doing so first.

I plan to add the dataset, evaluate its performance, and fill in the scores in the table in README.md.
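Since the annotation styles differ between datasets, the comparison comes down to how predicted shot boundaries are matched against the ground truth. Below is a minimal sketch of one common approach: greedy one-to-one matching of boundary frames within a tolerance window, then precision/recall/F1. The function name, the `tolerance` parameter, and the matching strategy are my assumptions for illustration, not the benchmark's actual code.

```python
def evaluate_boundaries(pred, gt, tolerance=5):
    """Score predicted shot-boundary frames against ground truth.

    Each predicted boundary may match at most one ground-truth
    boundary, and only if they are within `tolerance` frames.
    Returns (precision, recall, f1).
    """
    matched = set()  # indices of ground-truth boundaries already used
    tp = 0
    for p in pred:
        # Find the nearest unmatched ground-truth boundary in range.
        best = None
        for i, g in enumerate(gt):
            if i in matched or abs(p - g) > tolerance:
                continue
            if best is None or abs(p - g) < abs(p - gt[best]):
                best = i
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

A differing annotation style (for example, gradual transitions annotated as frame ranges rather than single cut frames) would require converting ClipShots labels into the same boundary representation before scoring.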
