Question About Attentive Feature Network #5
Hi,
[screenshot of the attentive feature module attached here; the next comment refers to it as "the snapshot above"]
Hi, according to the snapshot above, the input "F" is [C,H,W] and the output attention map "a" is [L,H,W]. In this case L, as you suggested, is 8.
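For concreteness, here is a minimal prototxt sketch of that step: a 1x1 convolution turning F [C,H,W] into a [L,H,W] with L = 8. The layer and blob names "F" and "a" are placeholders, not taken from the paper:
layer {
  name: "attention_conv"      # 1x1 conv: F [C,H,W] -> attention maps a [L,H,W]
  type: "Convolution"
  bottom: "F"
  top: "a"
  convolution_param {
    num_output: 8             # L = 8 attention channels
    kernel_size: 1
    stride: 1
    pad: 0
  }
}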
@xh-liu Could you help us? T-T
Hi, have you re-implemented this paper? Can you give a prototxt example? Thank you very much! @ccJia
@Li1991 I haven't finished it yet. The AF-Net is still confusing me, and I don't know how to implement it.
@ccJia
@ccJia Yes, your understanding is right. We take one slice of "a" and do the element-wise multiplication with each channel of "F".
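One practical note: in stock Caffe the Eltwise PROD layer requires both bottoms to have identical shapes, so a single 1-channel slice of "a" would likely need to be tiled to C channels before multiplying it with "F". A rough sketch of that pattern (blob names are placeholders, and C = 256 is only an assumed channel count):
layer {
  name: "tile_a_0"            # broadcast the 1-channel attention slice to C channels
  type: "Tile"
  bottom: "a_slice_0"
  top: "a_slice_0_tiled"
  tile_param {
    axis: 1                   # tile along the channel axis
    tiles: 256                # = C, assumed here
  }
}
layer {
  name: "att_feature_0"       # element-wise product: each channel of F scaled by slice 0 of a
  type: "Eltwise"
  bottom: "F"
  bottom: "a_slice_0_tiled"
  top: "att_feature_0"
  eltwise_param {
    operation: PROD
  }
}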
@xh-liu Thank you ^-^!
@xh-liu Hi, it seems the @ccJia prototxt still has some other error. I think the number of outputs from element-wise multiplying "a" with "F" is 24 (3x8), and the total number fed to the GAP is 72 (24x3). That means each slice of "a" needs to be element-wise multiplied with F1, F2, and F3. Is that right?
@hezhenjun123 I think the GAP input is (24x3 + 1), which means (hydra, plus).
@bilipa Yeah, I think you are right!
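If that reading of the thread is right, the arithmetic works out as: 8 attention channels x 3 feature blocks (F1, F2, F3) = 24 attended feature maps per attention branch, 24 x 3 branches = 72, and 72 + 1 main-branch feature = 73 inputs to the GAP. (This is only my tally of the comments above, not something confirmed by the authors.)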
Hi Liu,
I want to make sure I have the structure of the AF-Net right. Could you help me check it?
I modified the branch after the "ch_concat_3a_chconcat" layer, and I just use L = 4.
This is my prototxt for Caffe:
layer {
name: "ch_concat_3a_chconcat"
type: "Concat"
bottom: "conv_3a_1x1"
bottom: "conv_3a_3x3"
bottom: "conv_3a_double_3x3_1"
bottom: "conv_3a_proj"
top: "ch_concat_3a_chconcat"
}
layer {
name: "attention_conv_3b_1x1"
type: "Convolution"
bottom: "ch_concat_3a_chconcat"
top: "attention_conv_3b_1x1"
convolution_param {
num_output: 4
kernel_size: 1
stride: 1
pad: 0
}
}
layer {
name: "slice_attention_conv_3b_1x1"
type: "Slice"
bottom: "attention_conv_3b_1x1"
top: "slice_attention_conv_3b_1x1_0"
top: "slice_attention_conv_3b_1x1_1"
top: "slice_attention_conv_3b_1x1_2"
top: "slice_attention_conv_3b_1x1_3"
slice_param {
axis: 1
slice_point: 1
slice_point: 2
slice_point: 3
}
}
layer {
name: "attention_mul_feature_0"
type: "Eltwise"
bottom: "ch_concat_3a_chconcat"
bottom: "slice_attention_conv_3b_1x1_0"
top: "attention_mul_feature_0"
eltwise_param {
operation: PROD
}
}
layer {
name: "attention_mul_feature_1"
type: "Eltwise"
bottom: "ch_concat_3a_chconcat"
bottom: "slice_attention_conv_3b_1x1_1"
top: "attention_mul_feature_1"
eltwise_param {
operation: PROD
}
}
layer {
name: "attention_mul_feature_2"
type: "Eltwise"
bottom: "ch_concat_3a_chconcat"
bottom: "slice_attention_conv_3b_1x1_2"
top: "attention_mul_feature_2"
eltwise_param {
operation: PROD
}
}
layer {
name: "attention_mul_feature_3"
type: "Eltwise"
bottom: "ch_concat_3a_chconcat"
bottom: "slice_attention_conv_3b_1x1_3"
top: "attention_mul_feature_3"
eltwise_param {
operation: PROD
}
}
layer {
name: "attention_3a_chconcat"
type: "Concat"
bottom: "attention_mul_feature_0"
bottom: "attention_mul_feature_1"
bottom: "attention_mul_feature_2"
bottom: "attention_mul_feature_3"
top: "attention_3a_chconcat"
}
Thank you.