
models for ETH-UCY in meter coordinates #11

Open
DelinquentLeon opened this issue Oct 9, 2024 · 6 comments
Comments

@DelinquentLeon

Thanks for your work!
I was wondering if there are pretrained models for ETH-UCY in meter coordinates, since the models released online are named xxx-pixel-multimodal-xxxx. Or can those also be used for evaluating ETH-UCY in meter coordinates?

@InhwanBae
Owner

Hi @DelinquentLeon,

Thank you for your interest in our work!

The pretrained models are trained using pixel coordinates, but we project these coordinates into meter coordinates for evaluation. This way, our evaluations are directly comparable to those of other models that use meter coordinates. Therefore, you can directly evaluate the released models on ETH-UCY in meter coordinates, as the projection is part of the evaluation process.

# Homography warping: project pixel-space predictions into world (meter) space
if cfg.metric == "pixel":
    H = homography[scene_id[ped_id]]  # per-scene 3x3 homography matrix
    all_preds[ped_id] = image2world(all_preds[ped_id], H)
    all_gts[ped_id] = pred_traj[ped_id]
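For reference, image2world applies the scene's 3x3 homography to each (x, y) point. Here is a minimal sketch of what such a projection typically looks like; this is an assumption about the general technique, not necessarily the repo's exact implementation:

import numpy as np

def image2world(traj_px, H):
    # traj_px: array of shape (..., 2) holding (x, y) pixel positions.
    # H: 3x3 homography mapping image coordinates to world (meter) coordinates.
    traj = np.asarray(traj_px, dtype=np.float64)
    # Lift to homogeneous coordinates, apply the homography, then normalize.
    ones = np.ones((*traj.shape[:-1], 1))
    traj_h = np.concatenate([traj, ones], axis=-1)
    world = traj_h @ H.T
    return world[..., :2] / world[..., 2:3]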

Let me know if you need any more clarification!

@DelinquentLeon
Author

Thanks for your kind reply! But I've run into some other problems when trying to train the model on the SDD dataset.
I was wondering how _reference.png is created. Is it selected from a random frame? And does it matter which frame I choose to generate the caption for the SDD dataset?

@InhwanBae
Owner

Hi @DelinquentLeon,

For the SDD dataset, I used the same reference image provided in Y-Net. The _reference.png file is identical to the reference.jpg found in the annotation folder of the original SDD dataset. Since the SDD videos are static, selecting a random frame should not cause any issues in generating the caption.
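If you need to regenerate it yourself, here is a quick sketch of grabbing a single frame with OpenCV (the paths are hypothetical; adjust them to your local SDD layout):

import cv2

# Hypothetical paths; adjust to your local copy of SDD.
video_path = "sdd/videos/bookstore/video0/video.mov"
out_path = "sdd/annotations/bookstore/video0/_reference.png"

cap = cv2.VideoCapture(video_path)
ok, frame = cap.read()  # first frame; any frame works since the camera is static
cap.release()
if ok:
    cv2.imwrite(out_path, frame)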

@DelinquentLeon
Author

Thanks for your kind reply!
I also wondered whether you directly used the data files train_trajnet.pkl and test_trajnet.pkl from Y-Net for training and evaluation, because I've run into some difficulties when preprocessing the SDD dataset, especially aligning the coordinates with the reference picture.
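For what it's worth, one way to check the alignment is to overlay the pickled trajectories on the reference image. A sketch, assuming Y-Net's pickle is a pandas DataFrame with sceneId, x, and y columns in pixel coordinates (the filename below is hypothetical):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_pickle("train_trajnet.pkl")
scene = df[df["sceneId"] == df["sceneId"].iloc[0]]

img = plt.imread("bookstore_0_reference.png")  # hypothetical filename
plt.imshow(img)
plt.scatter(scene["x"], scene["y"], s=1, c="red")  # points should land on walkways
plt.show()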

@DelinquentLeon
Author

Additionally, could you please release the pretrained model for the SDD dataset?

@InhwanBae
Owner

Hi @DelinquentLeon,

Unfortunately, I don't have the code and pretrained model for the SDD dataset available at the moment. Due to frequent requests, I'm considering reimplementing it later this year. You can find more details about the SDD implementation in issue #7.
