|
80 | 80 | "metadata": {},
|
81 | 81 | "source": [
|
82 | 82 | "## CMAC vs the human brain\n",
|
83 |
| - "# TODO: Finish\n", |
84 |
| - "Looking at Figure 1, we can correlate each section loosely to the human brain. However, these are loose correlations and do not accurately model the cerebellum. More importantly the model is fast as it can be viewed as a lookup table.\n", |
| 83 | + "Looking at Figure 3, we can correlate each section loosely to the human brain. However, these are loose correlations and do not accurately model the cerebellum. More importantly the model is fast as it can be viewed as a lookup table.\n", |
85 | 84 | "\n",
|
86 |
| - "#### Feature Detecting Neurons\n", |
87 |
| - "The first layer is related to sensory feature-detecting neurons.\n", |
| 85 | + "#### Feature Detecting Neurons (Mossy fibers)\n", |
| 86 | + "The Mossy fibers are one of the two input fiber systems to the cerebellum. Therefore as the first layer (L1) in Figure 3 is the input to the CMAC architecture, drawing a parallel is fairly natural.\n", |
88 | 87 | "\n",
|
89 | 88 | "#### Granule cells\n",
|
90 |
| - "The second layer in Figure 1 corresponds to granule cells.\n", |
| 89 | + "These cells are the most numerous cells in the brain ($\\approx3 \\times 10^{10}$ granule cells in the cerebellum\n", |
| 90 | + "alone). They can be said to be association cells which \"recode inputs from N inputs to at least 100N inputs\" [6]. \n", |
| 91 | + "We can relate this mechanism to the second layer in Figure 3 since in the CMAC architecture, a single feature links to multiple tilings/AUs.\n", |
91 | 92 | "\n",
|
92 | 93 | "#### Purkinje cells\n",
|
93 |
| - "The third layer in Figure 1 corresponds to Purkinje cells.\n", |
94 |
| - "\n", |
95 |
| - "### Differences from the human brain\n", |
96 |
| - "\n", |
97 |
| - "There are suggestions that the cerebellum is capable of exclusive disjunction which is not in CMAC [1]. TODO more on this." |
| 94 | + "Purkinje cells, as shown below, are highly branched neurons which connect to a number of other cells. They exist in a single layer over the cortex. They form a sort of tree structure over other cells. A parallel fiber below it “synapses with virtually every Purkinje dendritic tree it passes” [6]. The axons of these fibers are the only network exit from the cerebellar cortex. \n", |
| 95 | + "Note that this structure relates to Layer 3 fairly intuitively. As Layer 3 in our CMAC architecture connects in a tree-like fashion to the AUs, so does the Purkinje fibers connect down to the parallel synapses below it and aggregate these potentials before passing them out of the cerebellar cortex. This is an obvious simplification of the fibers, but the parallels are noticeable.\n", |
| 96 | + " [wikipedia:Purkinje cells]" |
98 | 97 | ]
|
99 | 98 | },
|
100 | 99 | {
|
|
109 | 108 | "## CNNs vs CMAC\n",
|
110 | 109 | "\n",
|
111 | 110 | "Indeed, Tile Coding bears some resemblance to both Convolutional Neural Networks and HMAX (Hierarchical Model and X) in several ways. \n",
|
112 |
| - "To demonstrate this similarity, we first look at the biological basis for CNNs/HMAX. In HMAX [6], the early layers of the visual cortex are modeled as convolutional filters which are precomputed to match those found in the visual cortex. These convolutional filters match the receptive fields found in the human brain. Receptive fields come from the foundation that each cell is activated by a small area of the visual input. In the early layers of the visual cortex, we find that the receptive fields are simple and circular (on and off depending on stimulus in one area). These cells are called simple cells. \n", |
113 |
| - "Moving forward in the visual cortex, we find more complex activation patterns (complex cells) which have receptive fields responsive to lines or curves. In CNNs, these convolutional filters representing receptive fields are learned [7]. These compare to tile coding as the tiles can be thought of as square receptive fields which aggregate the input from a set of sensory inputs. As we can see, this parallels HMAX and CNNS. " |
| 111 | + "To demonstrate this similarity, we first look at the biological basis for CNNs/HMAX. In HMAX [7], the early layers of the visual cortex are modeled as convolutional filters which are precomputed to match those found in the visual cortex. These convolutional filters match the receptive fields found in the human brain. Receptive fields come from the foundation that each cell is activated by a small area of the visual input. In the early layers of the visual cortex, we find that the receptive fields are simple and circular (on and off depending on stimulus in one area). These cells are called simple cells. \n", |
| 112 | + "Moving forward in the visual cortex, we find more complex activation patterns (complex cells) which have receptive fields responsive to lines or curves. In CNNs, these convolutional filters representing receptive fields are learned [8]. These compare to tile coding as the tiles can be thought of as square receptive fields which aggregate the input from a set of sensory inputs. As we can see, this parallels HMAX and CNNS. " |
114 | 113 | ]
|
115 | 114 | },
|
116 | 115 | {
|
|
140 | 139 | "source": [
|
141 | 140 | "# Implementation and Experiments\n",
|
142 | 141 | "Our implementation of CMAC/Tile Coding was done in TensorFlow. \n",
|
143 |
| - "Note: Our implementation is based off of [8, 9], and the plotting functions off of [9].\n", |
| 142 | + "Note: Our implementation is based off of [9, 10], and the plotting functions off of [10].\n", |
144 | 143 | "\n",
|
145 | 144 | "### Function approximation\n",
|
146 |
| - "We demonstrate our CMAC implementation on a few 2-dimensional functions and compare it to a 2-layer Multi Layer Perceptron : \n", |
147 |
| - "Note that for the MLP mean squared error graphs, each epoch is 100 training examples.\n", |
| 145 | + "We demonstrate our CMAC implementation with parameters resolution = 50 and n_AU =50 on a few 2-dimensional functions and compare it to a 2-layer Multi Layer Perceptron : \n", |
| 146 | + "Note that for the both mean squared error graphs, each epoch is 100 training examples.\n", |
| 147 | + "\n", |
148 | 148 | "- $f(\\textbf{x}) = cos(x_1) + sin(x_2)$ : \n",
|
149 | 149 | "#### CMAC\n",
|
150 | 150 | "\n",
|
|
155 | 155 | "And the associated mean squared error: \n",
|
156 | 156 | "\n",
|
157 | 157 | "\n",
|
158 |
| - "As we can observe, the MLP takes about 1100*100 = 110000 training examples to reach an MSE below 0.1 while CMAC only needs about 20." |
| 158 | + "As we can observe, the MLP takes about 1100*100 = 110000 training examples to reach an MSE below 0.1 while CMAC only needs about 20*100 = 2000.\n", |
| 159 | + "\n", |
| 160 | + "\n", |
| 161 | + "- $ \\sin (\\sqrt{( (2x_0 -4)^2)} + \\sqrt{((2x_1 - 4)^2)})$:\n", |
| 162 | + "#### CMAC\n", |
| 163 | + "\n", |
| 164 | + "And the associated mean squared error: \n", |
| 165 | + "\n", |
| 166 | + "#### MLP\n", |
| 167 | + "\n", |
| 168 | + "And the associated mean squared error: \n", |
| 169 | + "" |
159 | 170 | ]
|
160 | 171 | },
|
161 | 172 | {
|
|
174 | 185 | "\n",
|
175 | 186 | "[5] J.S. Albus (1975). \"A New Approach to Manipulator Control: the Cerebellar Model Articulation Controller (CMAC)\". In: Trans. ASME, Series G. Journal of Dynamic Systems, Measurement and Control, Vol. 97, pp. 220–233, 1975.\n",
|
176 | 187 | "\n",
|
177 |
| - "[6] Henderson, Peter, and Dhirendra Singh (2013). \"Biologically Motivated Object Recognition.”\n", |
| 188 | + "[6] J.S. Albus (1978). \"A theory of cerebellar function\"\n", |
| 189 | + "\n", |
| 190 | + "[7] Henderson, Peter, and Dhirendra Singh (2013). \"Biologically Motivated Object Recognition.”\n", |
178 | 191 | "\n",
|
179 |
| - "[7] Rashad, https://www.quora.com/What-is-a-receptive-field-in-a-convolutional-neural-network\n", |
| 192 | + "[8] Rashad, https://www.quora.com/What-is-a-receptive-field-in-a-convolutional-neural-network\n", |
180 | 193 | "\n",
|
181 |
| - "[8] Stober, 2012. https://gist.github.com/stober/1792732\n", |
| 194 | + "[9] Stober, 2012. https://gist.github.com/stober/1792732\n", |
182 | 195 | "\n",
|
183 |
| - "[9] Pezeshki, 2017. https://github.com/mohammadpz/Theano_Tile_Coding" |
| 196 | + "[10] Pezeshki, 2017. https://github.com/mohammadpz/Theano_Tile_Coding" |
184 | 197 | ]
|
185 | 198 | }
|
186 | 199 | ],
|
|