Can you share your code with me? #2

Open · LBB-Liuwl opened this issue Oct 14, 2017 · 13 comments

LBB-Liuwl commented Oct 14, 2017

Hi, I'm very interested in your results. Could you share your code with me?

xh-liu (Owner) commented Oct 15, 2017

Hi! Thank you for your interest! This project is built on the Caffe framework (https://github.com/BVLC/caffe) with only minor changes to the code. We only change some of the network structure (e.g., adding attention modules), which can be done by simply modifying the prototxt. The details of the architecture can be found in the paper, and if you have any questions about the structure, please feel free to ask.
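
For illustration, a sigmoid-gated attention block expressed purely in prototxt might look like the following. This is a minimal sketch, not the paper's exact module; the layer names, blob names, channel count, and gating design are all assumptions:

```
# Hypothetical attention module: a 1x1 conv predicts a gating mask that
# re-weights the feature map "feat" (assumed 512 channels) element-wise.
layer {
  name: "att_conv"
  type: "Convolution"
  bottom: "feat"
  top: "att_conv"
  convolution_param { num_output: 512 kernel_size: 1 }  # match feat's channel count
}
layer {
  name: "att_mask"
  type: "Sigmoid"
  bottom: "att_conv"
  top: "att_mask"
}
layer {
  name: "feat_att"
  type: "Eltwise"
  bottom: "feat"
  bottom: "att_mask"
  top: "feat_att"
  eltwise_param { operation: PROD }  # element-wise gating of the features
}
```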

ysc703 commented Oct 19, 2017

Hi!
In the paper, it says that the HP-net was trained in a stage-wise fashion. Which loss is used when training the M-net and fine-tuning the AF-net? Could you share the Caffe prototxt files?
Thank you!

xh-liu (Owner) commented Oct 20, 2017

Hi! In each stage of training, we always use the weighted sigmoid cross-entropy loss, as described in the paper. The weights for positive and negative examples aim to balance the loss contributions of positive and negative samples. We will release the detailed code and prototxt later. Thank you!
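
As a reference point, one generic form of a weighted sigmoid cross-entropy over $N$ samples and $L$ attributes is the following; the per-attribute weight $w_l$ here is an assumption (e.g., chosen inversely proportional to the positive frequency of attribute $l$), so check the paper for the exact weighting scheme:

$$\mathcal{L} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{l=1}^{L}\Big[\, w_l \, y_{nl}\,\log \sigma(x_{nl}) \;+\; (1 - y_{nl})\,\log\big(1 - \sigma(x_{nl})\big) \Big],$$

where $x_{nl}$ is the logit for attribute $l$ of sample $n$, $y_{nl} \in \{0,1\}$ is its label, and $\sigma$ is the sigmoid function.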

ysc703 commented Oct 21, 2017

@xh-liu Thanks!

chl916185 commented

Hi, I'm very interested in your paper. Could you share your code with me? @xh-liu

Li1991 commented Jan 8, 2018

Could you release the detailed code and prototxt? What I have implemented so far cannot reproduce your results. Thank you very much! @xh-liu

bilipa commented Jan 10, 2018

@Li1991 How do you combine the results?

xh-liu (Owner) commented Jan 16, 2018

We will release the detailed code later. Thank you for your interest!

ysc703 commented Apr 8, 2018

Hi @xh-liu, will the model or the prototxt files be released soon?

xh-liu (Owner) commented Apr 8, 2018

I have added some example prototxts in the folder prototxt_example. a0 and a3 are two of the nine branches in total; you can re-implement the other branches based on them. fusion denotes the whole net that fuses the features from the nine branches and the main branch. For computational simplicity, we extract the features of the nine branches offline and use the extracted features to train the final fusion layer and classifiers.
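
A fusion step of this kind could be expressed with a Concat layer along the channel axis, e.g. as in the sketch below. The blob names are hypothetical and only two of the branch features are shown; the attribute count in the classifier is an assumption as well:

```
# Hypothetical fusion snippet: concatenate the main-branch feature with
# the offline-extracted branch features along the channel axis, then
# classify with an InnerProduct layer.
layer {
  name: "concat_feat"
  type: "Concat"
  bottom: "feat_main"
  bottom: "feat_a0"
  bottom: "feat_a3"
  top: "feat_fused"
  concat_param { axis: 1 }  # axis 1 = channels in Caffe's NCHW layout
}
layer {
  name: "fc_attr"
  type: "InnerProduct"
  bottom: "feat_fused"
  top: "fc_attr"
  inner_product_param { num_output: 26 }  # number of attributes (assumed)
}
```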

Li1991 commented Apr 9, 2018

Hi, after looking at your example prototxts, I found that you use a layer called NNInterp, but I cannot find it. Could you please provide the original code for this layer? Thank you! @xh-liu

xh-liu (Owner) commented Apr 9, 2018

@Li1991 I have added the code for the nninterp layer in the folder 'layers'.
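
In prototxt, a nearest-neighbor upsampling layer of this kind would typically be used as in the sketch below. This is only an assumption modeled on DeepLab-style Interp layers: the blob names and the parameter block with its field names are hypothetical, so check the code in 'layers' for the real schema:

```
# Hypothetical usage: upsample a coarse feature map back to a target
# spatial size with nearest-neighbor interpolation.
layer {
  name: "upsample"
  type: "NNInterp"
  bottom: "feat_coarse"
  top: "feat_up"
  nninterp_param { height: 28 width: 28 }  # param block and fields are assumptions
}
```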

Li1991 commented Apr 9, 2018

Thank you for your kindness, @xh-liu. And where is your Python layer, FeatureConcatDataLayer?
