
Commit 0daaae3
Update batch size and loss smoothing param to match the ones from the paper
apaszke committed Nov 11, 2016
1 parent 704453a commit 0daaae3
Showing 2 changed files with 2 additions and 2 deletions.
train/models/decoder.lua (2 changes: 1 addition & 1 deletion)
@@ -126,7 +126,7 @@ for i = 1, #classes do
       print("Class " .. tostring(i) .. " not found")
       classWeights[i] = 0
    else
-      classWeights[i] = 1 / (torch.log(1.04 + normHist[i]))
+      classWeights[i] = 1 / (torch.log(1.02 + normHist[i]))
    end
 end

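For context, the changed line computes the paper's class weights, w_class = 1 / ln(c + p_class), where p_class is a class's normalized pixel frequency (normHist[i] above); the commit lowers the smoothing constant c from 1.04 to 1.02. A minimal standalone Lua sketch (the helper and the sample frequencies are illustrative, not code from the repo) of how c caps the weight given to rare classes:

-- Sketch: w = 1 / ln(c + p) with p in [0, 1]. A rare class (p near 0)
-- gets the maximum weight 1 / ln(c), so a smaller c boosts rare classes more.
local function classWeight(c, p)
   return 1 / math.log(c + p)
end

for _, c in ipairs({1.04, 1.02}) do
   print(string.format("c = %.2f: max weight %.1f, weight at p = 0.5 is %.2f",
                       c, classWeight(c, 0), classWeight(c, 0.5)))
end
-- c = 1.04: max weight 25.5, weight at p = 0.5 is 2.32
-- c = 1.02: max weight 50.5, weight at p = 0.5 is 2.39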
train/opts.lua (2 changes: 1 addition & 1 deletion)
@@ -16,7 +16,7 @@ function opts.parse(arg)
 -d,--learningRateDecay  (default 1e-7)  learning rate decay (in # samples)
 -w,--weightDecay        (default 2e-4)  L2 penalty on the weights
 -m,--momentum           (default 0.9)   momentum
--b,--batchSize          (default 8)     batch size
+-b,--batchSize          (default 10)    batch size
 --maxepoch              (default 300)   maximum number of training epochs
 --plot                                  plot training/testing error
 --lrDecayEvery          (default 100)   Decay learning rate every X epoch by 1e-1
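These flags are torch-style lapp option strings; a short sketch (variable names assumed, not the repo's exact training loop) of how such parsed defaults are commonly passed to optim.sgd, with batchSize consumed separately during minibatch assembly:

-- Sketch (assumed usage, not the repo's exact code): parsed options
-- feeding an optim.sgd-style configuration table.
local opt = {  -- stand-in for opts.parse(arg), defaults as listed above
   learningRateDecay = 1e-7,
   weightDecay       = 2e-4,
   momentum          = 0.9,
   batchSize         = 10,   -- the value this commit changes from 8
}
local optimState = {
   learningRateDecay = opt.learningRateDecay,  -- -d
   weightDecay       = opt.weightDecay,        -- -w
   momentum          = opt.momentum,           -- -m
}
-- opt.batchSize is used when assembling each minibatch, not by the optimizer.
print(optimState.momentum, opt.batchSize)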
