issue about the thumos14_test_normalized_proposal_list.txt #13
If I remember correctly, these 3 videos have incorrect annotations that sit beyond the videos' time span. As for the testing results, do you have specific numbers and settings for me to look at?
OK! Firstly, I only use the 213 videos that actually make sense in the evaluation process, and run the
The RGB and flow performance you got are both lower than the reference results on our machine. My guess is that this may be due to your modified data loading routines. If you upload the generated proposal lists, I may be able to help you.
Hi Yuanjun,
@bityangke
@suhaisheng Hi, I got a worse result on the flow modality, but I don't know how to fix it. Could you please share your thumos14_flow_score.npz? I just want to verify where the problem is. Thank you very much!
This is my reproduced result for the Flow modality (unzip it to get the .npz file). Note that there is still a 1.7% difference from the paper.
@suhaisheng Thank you very much!! [pasted output table: Detection Performance on THUMOS14] eval_detection_results.py only needs two files, so I think that if we use the same files we should get the same result. Maybe I should check my code again.
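One quick way to rule out a score mismatch is to diff the two .npz files directly before feeding them to eval_detection_results.py. A minimal sketch, assuming the archives map video ids to numpy arrays (the file names below are placeholders):

```python
import numpy as np

# Placeholder file names for the two score archives being compared.
mine = np.load('my_thumos14_flow_score.npz', allow_pickle=True)
theirs = np.load('shared_thumos14_flow_score.npz', allow_pickle=True)

# Videos present in one archive but missing from the other (e.g. 210 vs 213).
print('only in mine:', sorted(set(mine.files) - set(theirs.files)))
print('only in theirs:', sorted(set(theirs.files) - set(mine.files)))

# For shared videos, report shapes and the largest absolute difference.
for vid in sorted(set(mine.files) & set(theirs.files)):
    a, b = np.asarray(mine[vid]), np.asarray(theirs[vid])
    if a.shape != b.shape or a.dtype == object:
        print(vid, 'shape/dtype mismatch:', a.shape, a.dtype, b.shape, b.dtype)
    else:
        print(vid, 'max abs diff:', float(np.abs(a - b).max()))
```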
The result is the same as mine (you may have mistaken my other experiment's result (--arch InceptionV3) for it). What I am curious about is why the RGB modality result you got is higher than mine, and even higher than the authors' number listed in the paper.
@suhaisheng Have you figured out why? I found that in thumos14_tag_val_normalized_proposal_list.txt there are also many videos with no ground truth. That just makes no sense.
Hi, @suhaisheng. Could you explain the meaning of each line in thumos14_tag_val_normalized_proposal_list.txt?
@jiaozizhao
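For later readers, here is a rough reading of the file, stated as an assumption rather than the authors' documentation: each video gets one block consisting of a `#` index line, the video path, the frame count, a correction factor, the number of ground-truth instances followed by one `label start end` line each (start/end normalized to [0, 1]), and the number of proposals followed by one `label best_iou overlap start end` line each. A minimal parsing sketch under that assumption:

```python
def parse_proposal_list(path):
    """Parse a normalized proposal list into per-video records.

    Assumed block layout (not taken from the authors' docs):
        # <index>
        <video path>
        <num frames>
        <correction factor>
        <num ground-truth>  then that many "label start end" lines
        <num proposals>     then that many "label best_iou overlap start end" lines
    """
    with open(path) as f:
        lines = [l.strip() for l in f if l.strip()]

    videos = []
    i = 0
    while i < len(lines):
        assert lines[i].startswith('#'), 'expected a "#" index line'
        vid = lines[i + 1]
        num_frames = int(float(lines[i + 2]))
        i += 4  # skip index, path, frame count, correction factor
        n_gt = int(lines[i]); i += 1
        gts = [tuple(map(float, lines[i + k].split())) for k in range(n_gt)]
        i += n_gt
        n_prop = int(lines[i]); i += 1
        props = [tuple(map(float, lines[i + k].split())) for k in range(n_prop)]
        i += n_prop
        videos.append({'video': vid, 'frames': num_frames,
                       'gt': gts, 'proposals': props})
    return videos
```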
Hi @yjxiong. Thanks. And could you explain the result after running ssn_test.py? I noticed there are four arrays for each video. Could you explain them? If I want to visualize the results, what information should I use? Thanks.
@jiaozizhao For how the results are evaluated, you can refer to
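For a first look at what ssn_test.py wrote out, printing the per-video entry shapes is usually enough to tell the four arrays apart (my guess is that they are the proposal intervals plus the activity, completeness, and location-regression outputs, but check that against the code rather than taking it from here). A minimal inspection sketch with a placeholder file name:

```python
import numpy as np

# Placeholder path to a score archive written by ssn_test.py.
scores = np.load('thumos14_flow_score.npz', allow_pickle=True)

for vid in sorted(scores.files)[:5]:  # look at the first few videos only
    entry = scores[vid]
    # If the entry is an object array it holds several arrays per video;
    # otherwise treat it as a single array.
    arrays = list(entry) if entry.dtype == object and entry.ndim > 0 else [entry]
    print(vid, [np.asarray(a).shape for a in arrays])
```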
Hi @yjxiong. Thank you very much. And sorry for not reading the code carefully due to my urgency. I will read it.
Hello, do you know how to generate this normalized_proposal_list.txt for other videos?
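There is no official recipe in this thread, but if the block layout assumed in the parsing sketch above is right, generating a list for new videos amounts to writing the same blocks yourself, with ground truth and proposals (from TAG or any other proposal method) normalized by the frame count. A hypothetical writer under that assumption:

```python
def write_proposal_list(path, videos):
    """Write per-video blocks in the assumed normalized proposal list layout.

    `videos` is a list of dicts with keys: 'video', 'frames',
    'gt' as (label, start, end) with start/end in [0, 1], and
    'proposals' as (label, best_iou, overlap, start, end) in [0, 1].
    """
    with open(path, 'w') as f:
        for idx, v in enumerate(videos, 1):
            # Index line, video path, frame count, correction factor (assumed 1).
            f.write('# %d\n%s\n%d\n1\n' % (idx, v['video'], v['frames']))
            f.write('%d\n' % len(v['gt']))
            for g in v['gt']:
                f.write('%d %.4f %.4f\n' % (int(g[0]), g[1], g[2]))
            f.write('%d\n' % len(v['proposals']))
            for p in v['proposals']:
                f.write('%d %.4f %.4f %.4f %.4f\n'
                        % (int(p[0]), p[1], p[2], p[3], p[4]))
```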
Hello, I am trying to reproduce your impressive work. For convenience (computational cost), I only use the 213 videos that are later used for testing by the THUMOS14 dataset evaluation toolkit. However, I found that something might be wrong with the ground-truth annotations in your thumos14_tag_test_normalized_proposal_list.txt file. For example, check the ground-truth annotations of the following three videos: video_test_0001292, video_test_0000270, video_test_0001496.
In your .txt file these three videos are negative with 0 ground-truth instances, but in the THUMOS14 test annotations all of them contain several ground-truth action instances. So when I run the SSN test Python script, my video count decreases from 213 to 210, and my final reproduced results are lower than those listed in the paper (about a 1.5% difference). Waiting for your reply, thanks so much!
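A quick sanity check for the three videos named above is to count their instances directly in the official THUMOS14 test annotations. The annotation directory path and per-class file layout below (`<ClassName>_test.txt` files with `video_id start end` lines) are assumptions; adjust them to your local copy:

```python
import glob

suspect = ['video_test_0001292', 'video_test_0000270', 'video_test_0001496']

# Assumed layout of the official THUMOS14 test annotations: one
# <ClassName>_test.txt file per class, each line "video_id start end".
counts = {v: 0 for v in suspect}
for ann in glob.glob('TH14_Temporal_annotations_test/annotation/*_test.txt'):
    with open(ann) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] in counts:
                counts[parts[0]] += 1

for vid, n in counts.items():
    print('%s: %d ground-truth instances in the official annotations' % (vid, n))
```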