Leabra Tips and Tricks
- If the network behaves slightly differently even when learning is turned off, the most likely cause is that the exact scaling of synaptic weight strengths depends on the average activity over the sending layer, and this estimate is updated as the network runs. So even when learning is completely off and nothing else is changing, this overall weight scaling can still be changing, typically leading to small differences in performance.
- To prevent this problem, set `Layer.Inhib.ActAvg.Fixed = true`, and set `Layer.Inhib.ActAvg.Init = <reasonable avg act>`. A reasonable average activity value can be obtained by looking at the last column in `Layer.Pools[0]`, which has the `ActPAvgEff` value that is used for scaling; run the model for a bit first so this value reflects a reasonable number. A code sketch of this is given after this list.
- See `leabra/pbwm` and `leabra/deep` for extensive examples. The key principle is that you add additional slices to store any new Neuron- or Synapse-level variables. All of the original base-level `leabra.Neuron` values stay as they are, and whenever you need to access the new variables, you go through the new slice. This allows the original base code to function identically and more cleanly partitions the new code; aside from making everything virtual (which would make everything a lot more obscure and also slower), it is the only option. A sketch of this parallel-slice pattern is also given after this list.
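Here is a minimal sketch in Go of fixing the average-activity estimate, assuming the `github.com/emer/leabra/leabra` package layout and that the effective average lives at `Pools[0].ActAvg.ActPAvgEff`; the helper name `FixLayerActAvg` is hypothetical and exact field paths may differ between versions:

```go
package tips

import (
	"fmt"

	"github.com/emer/leabra/leabra"
)

// FixLayerActAvg freezes the average-activity estimate used for weight
// scaling on the named layer, using the ActPAvgEff value the layer has
// accumulated so far. Call it only after the model has run for a bit,
// so that ActPAvgEff reflects a reasonable number.
func FixLayerActAvg(net *leabra.Network, layName string) {
	ly := net.LayerByName(layName).(*leabra.Layer)

	// Effective average activation used for weight scaling, from the
	// layer-level pool (Pools[0]).
	eff := ly.Pools[0].ActAvg.ActPAvgEff
	fmt.Printf("%s ActPAvgEff: %g\n", layName, eff)

	// Fix the estimate so overall weight scaling no longer drifts as the
	// network runs, even with learning off.
	ly.Inhib.ActAvg.Fixed = true
	ly.Inhib.ActAvg.Init = eff
}
```

In many models the same settings are instead applied through a params specification (e.g., `"Layer.Inhib.ActAvg.Fixed": "true"` in a `params.Params` map), which has the same effect as setting the fields directly.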
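The following is a minimal sketch of the parallel-slice pattern, loosely modeled on how `leabra/deep` and `leabra/pbwm` extend the base layer. The type names (`MyLayer`, `MyNeuron`), the `TraceAct` variable, and the update rule are all hypothetical illustrations, and exact method signatures may differ between leabra versions:

```go
package mynet

import "github.com/emer/leabra/leabra"

// MyNeuron holds extra per-neuron state for the extended algorithm.
// TraceAct is a hypothetical example variable.
type MyNeuron struct {
	TraceAct float32 // slowly-decaying trace of activation
}

// MyLayer embeds the standard leabra.Layer unchanged, and adds a parallel
// slice of MyNeuron values, indexed identically to Layer.Neurons.
type MyLayer struct {
	leabra.Layer
	MyNeurs []MyNeuron // one per neuron, parallel to Neurons
}

// Build allocates the base neurons first, then the parallel slice.
func (ly *MyLayer) Build() error {
	if err := ly.Layer.Build(); err != nil {
		return err
	}
	ly.MyNeurs = make([]MyNeuron, len(ly.Neurons))
	return nil
}

// ActFmG runs the standard activation update, then updates the new
// per-neuron variables by walking the two slices in parallel.
func (ly *MyLayer) ActFmG(ltime *leabra.Time) {
	ly.Layer.ActFmG(ltime)
	for ni := range ly.Neurons {
		nrn := &ly.Neurons[ni]
		mnr := &ly.MyNeurs[ni]
		mnr.TraceAct += 0.1 * (nrn.Act - mnr.TraceAct) // hypothetical trace update
	}
}
```

The base `leabra.Neuron` values are never touched: all new state lives in the new slice, accessed with the same neuron index, so the original base code keeps functioning identically while the new code stays cleanly partitioned.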