diff --git a/ch4/hebberr_combo/README.md b/ch4/hebberr_combo/README.md
index aa9a921..243c28a 100644
--- a/ch4/hebberr_combo/README.md
+++ b/ch4/hebberr_combo/README.md
@@ -15,7 +15,7 @@ In this project we have expanded the size of all layers to accommodate a richer
 * For this project we have not actually trained the network with all possible combinations of horizontal and vertical lines. We deliberately left out some novel combinations of lines it has not seen together before. These can then be used to test whether the network can correctly generalize to these new combinations without having to memorize them. Each time a new network is run, the program automatically selects 15% of the line combinations at random and puts them in a Test table.
-* While viewing `Act` on the netview, click `TestInit` and `TestRun`, which will step through all of the new combinations of lines the network has never seen together before. (You can also step through one at a time if you want). Look at the output patterns on the network and compare to the `Act / Targ` values which show the target (i.e. what the network should have responded). You can also switch to the `Test Trial` tab to see all the test trials and what the network guessed (shown in the second to last column, as the output activations) compared to what the correct answer would have been (the target) in the last column. To get a broader sense of the performance across multiple networks you can just click `TrainRun` and let it run through 10 networks with different initial weights and different permutations of training/test patterns. Switch to viewing the `Test Epoch Plot` tab, where you will see a graph of the network percent error on the test data across after every 5 epochs of training as each of the 10 networks is learning. (Again you can confirm that the networks are learning the training patterns by looking at `Train Epoch Plot`).
+* While viewing `Act` on the netview, click `Test Init` and `Test Run`, which will step through all of the new combinations of lines the network has never seen together before. (You can also step through one trial at a time if you want.) Look at the output patterns on the network and compare them to the `Act / Targ` values, which show the target (i.e., what the network should have produced). You can also switch to the `Test Trial` tab to see all the test trials, with what the network guessed (shown in the second-to-last column, as the output activations) compared to the correct answer (the target) in the last column. To get a broader sense of performance across multiple networks, you can just click `Train Run` and let it run through 10 networks with different initial weights and different permutations of training/test patterns. Switch to the `Test Epoch Plot` tab, where you will see a graph of the network's percent error on the test data after every 5 epochs of training as each of the 10 networks learns. (Again, you can confirm that the networks are learning the training patterns by looking at `Train Epoch Plot`.)
 > **Question 4.11:** Report what you see in the output in the test trials and over epochs of learning and runs. On average, does the network generalize its learning by reporting the correct combinations of lines for these novel patterns? Consider why this might be in terms of the internal hidden representations that were learned by the hidden layer in the earlier question.