
Leabra Tips and Tricks

Model doesn't exhibit replicable behavior under repeated testing

  • Most likely cause: the exact scaling of synaptic weight strengths depends on the average activity of the sending layer, and this average is updated as the network runs. So even when learning is completely off and nothing else is changing, this overall weight scaling can still drift, typically producing small differences in performance.

  • To prevent this problem, set Layer.Inhib.ActAvg.Fixed = true and set Layer.Inhib.ActAvg.Init = <reasonable avg act>. A reasonable value can be read from the ActPAvgEff field in Layer.Pools[0] (the last column), which is the value actually used for scaling; run the model for a bit first so that it reflects a representative number. A sketch follows below.
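
A minimal sketch of the fix, in Go. The layer name "Hidden" and the value 0.15 are hypothetical stand-ins; read the real value from Layer.Pools[0] ActPAvgEff after a warm-up run, and note that the lookup and type-assertion details may vary by version:

```go
import "github.com/emer/leabra/leabra"

// FixActAvg pins the activity-based weight scaling for a layer so that
// repeated testing with learning off gives exactly replicable results.
func FixActAvg(net *leabra.Network) {
	ly := net.LayerByName("Hidden").(*leabra.Layer) // "Hidden" is hypothetical
	ly.Inhib.ActAvg.Fixed = true
	ly.Inhib.ActAvg.Init = 0.15 // copy from Pools[0] ActPAvgEff after a warm-up run
}
```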

Extending functionality and Neuron / Synapse variables

  • See leabra/pbwm and leabra/deep for extensive examples. The key principle is to add extra slices that store any new Neuron- or Synapse-level variables, indexed in parallel with the originals. All the base-level leabra.Neuron values stay exactly as they are, and whenever you need to access the new variables, you go through the new slice. This lets the original base code function identically and cleanly partitions the new code; aside from making everything virtual (which would be much more obscure and also slower), it is the only option. A minimal sketch of the pattern follows below.
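
A minimal sketch of the parallel-slice pattern, assuming hypothetical names (MyLayer, MyNeuron, Extra); see leabra/deep for the real thing:

```go
import "github.com/emer/leabra/leabra"

// MyNeuron holds hypothetical extra per-neuron state; the base
// leabra.Neuron struct is left untouched.
type MyNeuron struct {
	Extra float32 // hypothetical new variable
}

// MyLayer extends leabra.Layer with a slice of MyNeuron that is
// indexed identically to the base Layer.Neurons slice.
type MyLayer struct {
	leabra.Layer
	MyNeurs []MyNeuron
}

// Build allocates the parallel slice after the base layer builds its neurons.
func (ly *MyLayer) Build() error {
	if err := ly.Layer.Build(); err != nil {
		return err
	}
	ly.MyNeurs = make([]MyNeuron, len(ly.Neurons))
	return nil
}

// DecayExtra shows the access pattern: base variables go through
// ly.Neurons[ni], new variables go through ly.MyNeurs[ni].
func (ly *MyLayer) DecayExtra(decay float32) {
	for ni := range ly.Neurons {
		ly.MyNeurs[ni].Extra -= decay * ly.MyNeurs[ni].Extra
	}
}
```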