First, don't Line 97 and Line 98 have a problem with the loop indices?
Here is the code from Line 93 to Line 101:
```python
# Modify output to backprop gradient based on network output
sh = out.squeeze().shape
conf_softmax = np.zeros((1, sh[0], sh[1], sh[2]))
if args.type == 'one-hot':
    for i in np.arange(sh[0]):      # should be sh[1] ?
        for j in np.arange(sh[1]):  # should be sh[2] ?
            conf_softmax[0, out_argmax[i, j], i, j] = 1.0
elif args.type == 'same':
    conf_softmax = copy.deepcopy(out[None, :, :, :])
```
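To make my question concrete, here is a minimal sketch of the indexing I would expect, assuming `out.squeeze()` has shape `(C, H, W)` (my assumption, inferred from the `conf_softmax[0, out_argmax[i, j], i, j]` indexing); the dummy data is only for illustration:

```python
import numpy as np

# Dummy data standing in for the network output; in the original code
# `out` comes from the network. Assumed shape after squeeze: (C, H, W).
C, H, W = 3, 4, 5
out = np.random.rand(1, C, H, W).astype(np.float32)

sh = out.squeeze().shape                   # (C, H, W)
out_argmax = out.squeeze().argmax(axis=0)  # per-pixel argmax, shape (H, W)
conf_softmax = np.zeros((1, sh[0], sh[1], sh[2]))

# Iterate over the spatial dimensions sh[1] (H) and sh[2] (W), so that the
# loop bounds match the [i, j] spatial indices used inside the loop.
for i in np.arange(sh[1]):
    for j in np.arange(sh[2]):
        conf_softmax[0, out_argmax[i, j], i, j] = 1.0

assert conf_softmax.sum() == sh[1] * sh[2]  # exactly one 1 per pixel
```

With `sh[1]` and `sh[2]` as the bounds, every pixel gets exactly one one-hot entry; with the original `sh[0]` and `sh[1]`, the loops would cover only a `C × H` corner of the image.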
Second, what is the loss function here? Why can conf_softmax be used directly as output_layer_grad?
I'm a novice to Caffe and I'm trying to understand your code. Could you help me with these questions?
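For context on the second question, here is my current guess, stated as an assumption rather than as what your code actually does: if the implicit objective were L = sum of the selected output activations, its gradient with respect to the output would be exactly the one-hot tensor, which would explain passing conf_softmax as the top-level gradient:

```python
import numpy as np

# My assumption: the implicit objective is L = sum(out * mask), where mask
# is the one-hot conf_softmax built above (1 at each pixel's argmax channel).
# Then dL/d(out) = mask, so the one-hot tensor can be fed to the backward
# pass directly as the output-layer gradient.
C, H, W = 3, 4, 5
out = np.random.rand(1, C, H, W)

mask = np.zeros_like(out)
idx = out[0].argmax(axis=0)  # per-pixel argmax channel, shape (H, W)
mask[0, idx, np.arange(H)[:, None], np.arange(W)[None, :]] = 1.0

L = (out * mask).sum()   # scalar objective: sum of selected activations
grad_out = mask          # analytic dL/d(out): exactly the one-hot mask
```

Is this the intended interpretation, or is there an explicit loss layer I'm missing?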