question for the results #6
I printed the results by uncommenting lines 155 to 159 in parima.py, and found that the prediction is always located in tiles (0, 0), (1, 0), (1, 1). The results are as follows:
I think you have not adjusted the height and width of the frame while converting the quaternions to their equirectangular form.
I changed head_orientation_lib.H and head_orientation_lib.W in head_orientation_lib.py as follows, which are used to convert the quaternions to their equirectangular form in get_view.py.
And then I converted the quaternions to equirectangular form again:
Then I ran the PARIMA code again:
But I still cannot get the desired results:
Maybe I have overlooked some details when running the code?
Thank you for your reply. I reset H=300 and W=600 in head_orientation_lib.py, and the Manhattan Error is 0.681, which is slightly better than the 0.685 reported in the paper. One more question: why can't we set H and W equal to the resolution of the equirectangular frame? For example, H=2048 and W=3840. In the paper "Your Attention is Unique: Detecting 360-Degree Video Saliency in Head-Mounted Display for Head Movement Prediction", it seems that H and W are the resolution of the equirectangular frame rather than that of the video player, i.e., the user's viewport.
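One way to see why different (H, W) pairs can still give comparable tile predictions: the tile a pixel falls in depends only on its *fractional* position in the frame, so scaling H and W together maps the same gaze direction to the same tile. This is a hedged illustration, not the repository's code; `tile_index` and the 8x8 tiling are hypothetical.

```python
# Hedged sketch: tile membership depends on the fraction of the frame,
# not on the absolute pixel resolution, so scaling H and W together
# (e.g. 300x600 vs 2048x3840) keeps the same gaze point in the same tile.
# tile_index and the 8x8 grid are illustrative, not PARIMA's actual code.
def tile_index(row, col, h, w, n_rows=8, n_cols=8):
    """Return the (tile_row, tile_col) containing pixel (row, col)."""
    return (row * n_rows // h, col * n_cols // w)

# The same normalized gaze point at two different frame resolutions:
print(tile_index(150, 300, 300, 600))     # (4, 4)
print(tile_index(1024, 1920, 2048, 3840)) # (4, 4)
```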
Thanks for your reply. But how should I change them?
Hello, I have a similar question. I changed H and W but still get different results on dataset2. And could you please tell me where you downloaded dataset1? The link returns a 404 error. Did you do anything else to get similar results?
The H and W in head_orientation_lib are used to get the pixel in the equirectangular frame corresponding to the quaternion. To get the appropriate pixel representation, the range needs to be selected such that it falls within the frame size of the equirectangular image. Hence H and W need to be set equal to the height and width of the corresponding equirectangular frame.
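The quaternion-to-pixel mapping described above can be sketched roughly as follows. This is a minimal illustration of the general idea (rotate a reference gaze vector, convert to longitude/latitude, scale by H and W), not the actual implementation in head_orientation_lib.py; the function name and the reference vector (1, 0, 0) are assumptions.

```python
import numpy as np

H, W = 360, 720  # must match the equirectangular frame size in use

def quaternion_to_pixel(q, h=H, w=W):
    """Hedged sketch: map a unit quaternion (w, x, y, z) to a (row, col)
    pixel in an h x w equirectangular frame. Illustrative only; the
    repository's head_orientation_lib may differ in conventions."""
    qw, qx, qy, qz = q
    # Rotate the reference gaze vector (1, 0, 0) by the quaternion.
    vx = 1 - 2 * (qy * qy + qz * qz)
    vy = 2 * (qx * qy + qw * qz)
    vz = 2 * (qx * qz - qw * qy)
    theta = np.arctan2(vy, vx)            # longitude in (-pi, pi]
    phi = np.arcsin(np.clip(vz, -1, 1))   # latitude in [-pi/2, pi/2]
    col = int((theta + np.pi) / (2 * np.pi) * (w - 1))
    row = int((np.pi / 2 - phi) / np.pi * (h - 1))
    return row, col

# The identity quaternion looks straight ahead, landing near frame centre:
print(quaternion_to_pixel((1.0, 0.0, 0.0, 0.0)))  # (179, 359)
```

Because `row` and `col` scale linearly with `h` and `w`, setting H and W smaller than the frame (e.g. 360x720 against a 1280x2560 video) produces pixel coordinates that never reach the outer tiles of the full-resolution tiling, which matches the symptom reported above.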
You mean that H and W in the program are not the height and width of each frame of the video, but the user's viewport size?
H and W are the height and width of the complete equirectangular frame. |
In the program, H and W are 360 and 720, respectively. But the height and width I obtain from the equirectangular frames are 1280 and 2560.
Kindly change H and W based on the experimental data you are using. H=360, W=720 was probably set because we were running some other experiments where the equirectangular frame size was 360x720. I hope this answers the question.
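A small sketch of the idea: derive H and W from an actual decoded frame rather than hard-coding them. The `frame` array here is a stand-in for one decoded video frame (in a real pipeline it would come from something like `cv2.VideoCapture`); the 1280x2560 shape is the dataset frame size mentioned above.

```python
import numpy as np

# Hedged sketch: set H and W from the decoded equirectangular frame
# instead of hard-coding 360x720. `frame` stands in for one real
# decoded frame (e.g. read via cv2.VideoCapture in practice).
frame = np.zeros((1280, 2560, 3), dtype=np.uint8)  # dataset frame: 1280x2560

H, W = frame.shape[:2]  # height and width of the equirectangular frame
print(H, W)  # 1280 2560
```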
I understand what you mean. Thank you very much! |
Sorry, but I still have a question about the program. What is the relationship between height, width, view_height, and view_width in meta.json, and also H and W?
(https://dl.acm.org/do/10.1145/3193701/abs/) The dataset archive at this link is corrupted and contains trajectory data for only some of the users. Could you send me a complete copy?
Hello, when I run the instruction below to get the results for the video timelapse in dataset1,
I get the result as follows:
the Manhattan Error of which is different from Table 1 in the paper.
I don't know why I get this different result. Maybe I am using the code in a wrong way?