updated readmes and exercises to reference TF and TF/Keras docs rather than old keras docs. Added scaffolding for RNN section
tebba-von-mathenstein committed Jan 5, 2021
1 parent 71e4c87 commit e00c0b5
Showing 12 changed files with 110 additions and 61 deletions.
22 changes: 11 additions & 11 deletions 01-intro-to-deep-learning/04-practice-exercise.md
@@ -1,23 +1,23 @@
# Exercise: Neural Network Basics in Keras
# Exercise: Neural Network Basics in Tensorflow

If you've worked through the three notebooks, you should be somewhat familiar with the basics of neural networks and how they are built using Keras. Now, you should solidify your understanding of these concepts by building and training some networks of your own.
If you've worked through the three notebooks, you should be somewhat familiar with the basics of neural networks and how they are built using Tensorflow. Now, you should solidify your understanding of these concepts by building and training some networks of your own.

## Exercise Goals

This exercise is meant to help you:

* Gain familiarity with the syntax and function of the Keras library.
* Gain familiarity with the Keras documentation.
* Gain familiarity with the syntax and function of the Tensorflow library.
* Gain familiarity with the Tensorflow documentation.
* Practice using neural network terminology.
* Practice building, training, and evaluating models with Keras.
* Create mental connections between deep learning theory and Keras code.
* Practice building, training, and evaluating models with Tensorflow.
* Create mental connections between deep learning theory and Tensorflow code.
* Compare the performance of different neural networks.

## Exercise Notes

You may wish to use Jupyter notebooks to complete this exercise, or you might prefer to write Python code and run it via the terminal, use an IDE like PyCharm, or use some other technology stack. Feel free to use any technology stack and workflow you are comfortable with. Our goal is to provide an exercise that helps you learn and solidify Deep Learning concepts and the details of the Keras framework—not to enforce a specific workflow, tool, or strategy for executing Python code.
You may wish to use Jupyter notebooks to complete this exercise, or you might prefer to write Python code and run it via the terminal, use an IDE like PyCharm, or use some other technology stack. Feel free to use any technology stack and workflow you are comfortable with. Our goal is to provide an exercise that helps you learn and solidify Deep Learning concepts and the details of the Tensorflow framework—not to enforce a specific workflow, tool, or strategy for executing Python code.

This exercise should take between 30 minutes and 1 hour to complete. The provided Jupyter notebooks contain much of the information you need to complete this exercise. However, you should also expect to look up information from the Keras docs, the provided external reading material, and other sources. You are encouraged to search for information on your own.
This exercise should take between 30 minutes and 1 hour to complete. The provided Jupyter notebooks contain much of the information you need to complete this exercise. However, you should also expect to look up information from the Tensorflow docs, the provided external reading material, and other sources. You are encouraged to search for information on your own.

Finally, this is not an exam. Correct answers are not provided. In fact, the exercise has enough ambiguity that many different answers will qualify as correct. You should be able to prove the correctness of your own answers using readily available tools—and in so doing you'll have learned quite a lot.

@@ -34,7 +34,7 @@ You will build a few neural networks during this exercise, for all the networks

### Part One:

Use Keras to build a network with a single hidden layer and at least 300,000 trainable parameters. Answer the following questions about this model:
Use Tensorflow to build a network with a single hidden layer and at least 300,000 trainable parameters. Answer the following questions about this model:

* How many total trainable parameters does this model have?
* How many weights?
@@ -44,11 +44,11 @@ Use Keras to build a network with a single hidden layer and at least 300,000 tra
* How different was the model's performance on the test data?
* About how long did each epoch take?

Use Keras to build a network with a single hidden layer and fewer than 50,000 trainable parameters, then answer the same questions.
Use Tensorflow to build a network with a single hidden layer and fewer than 50,000 trainable parameters, then answer the same questions.
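As a rough sketch only (not an official solution), the snippet below assumes a flattened 784-feature input and 10 output classes; the collapsed context above presumably specifies the actual dataset. With those shapes, a single hidden layer of 384 units already exceeds 300,000 trainable parameters, and roughly 60 units keeps the total under 50,000.

```python
# Hedged sketch: single-hidden-layer model in tf.keras. The 784-input / 10-class
# shapes are assumptions for illustration, not taken from the exercise text.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(384, activation="relu", input_shape=(784,)),  # 784*384 + 384 parameters
    tf.keras.layers.Dense(10, activation="softmax"),                    # 384*10 + 10 parameters
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

model.summary()              # per-layer parameter breakdown
print(model.count_params())  # ~305k trainable parameters; swap 384 for ~60 to land under 50k
```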

### Part Two:

Use Keras to build 3 networks, each with at least 10 hidden layers such that:
Use Tensorflow to build 3 networks, each with at least 10 hidden layers such that:

* The first model has fewer than 10 nodes per layer.
* The second model has between 10-50 nodes per layer.
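One hedged way to approach Part Two is to stack the hidden layers in a loop rather than writing each one by hand; the input shape, class count, and the remaining model spec (collapsed in this diff) are assumptions here.

```python
# Illustrative helper only: stacks n_hidden Dense layers of equal width.
import tensorflow as tf

def make_deep_model(nodes_per_layer, n_hidden=10, n_inputs=784, n_classes=10):
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(n_inputs,)))
    for _ in range(n_hidden):
        model.add(tf.keras.layers.Dense(nodes_per_layer, activation="relu"))
    model.add(tf.keras.layers.Dense(n_classes, activation="softmax"))
    return model

small = make_deep_model(8)    # fewer than 10 nodes per layer
medium = make_deep_model(32)  # between 10 and 50 nodes per layer
```
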
5 changes: 3 additions & 2 deletions 01-intro-to-deep-learning/readme.md
@@ -7,7 +7,7 @@ Deep learning is a subset of machine learning which is itself a subset of AI. Sp
* Define key machine learning and deep learning terminology.
* Describe neural networks as computational graphs and as complex mathematical formulas.
* Describe how neural networks are trained at a high level.
* Use Keras to build, train, and evaluate simple neural networks.
* Use Tensorflow to build, train, and evaluate simple neural networks.

## Part 1: What Are Neural Networks, and Why Now?

@@ -40,7 +40,7 @@ Theory is great, but putting the theory into practice is more practical. In this

### Helpful Documentation

* [Keras Docs](https://keras.io/)
* [Tensorflow Main Python Docs](https://www.tensorflow.org/api_docs/python/tf/)
* [TF Keras Frontend Docs](https://www.tensorflow.org/api_docs/python/tf/keras)
* [Matplotlib docs](https://matplotlib.org/)
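As a self-contained toy sketch of the build/train/evaluate cycle those docs cover; the random placeholder data and layer sizes are arbitrary and do not come from the course notebooks.

```python
import numpy as np
import tensorflow as tf

# Placeholder data purely so this snippet runs on its own.
x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, validation_split=0.2)
loss, acc = model.evaluate(x, y)
```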

## Part 3: Exploring Neural Network Architectures
4 changes: 2 additions & 2 deletions 02-training-and-regularization-tactics/05-pratice-exercise.md
@@ -17,9 +17,9 @@ This exercise is meant to help you:

You may wish to use Jupyter notebooks to complete this exercise, or you might prefer to write Python code and run it via the terminal, use an IDE like PyCharm, or use some other technology stack. Feel free to use any technology stack and workflow you are comfortable with. Our goal is to provide an exercise that helps you learn and solidify Deep Learning concepts and the details of the Keras framework—not to enforce a specific workflow, tool, or strategy for executing Python code.

This exercise may take longer than 1 hour to complete, but if after 2 hours of experimentation you have not achieved a 97% validation accuracy, you may wish to move on anyway. Again, the purpose of this exercise is to become more familiar with deep learning concepts and applying them with Keras. The 97% accuracy score is an arbitrary bar provided to make the exercise more challenging and engaging.
This exercise may take longer than 1 hour to complete, but if after 2 hours of experimentation you have not achieved a 97% validation accuracy, you may wish to move on anyway. Again, the purpose of this exercise is to become more familiar with deep learning concepts and applying them with Tensorflow. The 97% accuracy score is an arbitrary bar provided to make the exercise more challenging and engaging.

The provided Jupyter notebooks contain much of the information you need to complete this exercise. However, you should also expect to look up information from the Keras docs, the provided external reading material, and other sources. You are encouraged to search for information on your own.
The provided Jupyter notebooks contain much of the information you need to complete this exercise. However, you should also expect to look up information from the Tensorflow docs, the provided external reading material, and other sources. You are encouraged to search for information on your own.

Finally, this is not an exam. Correct answers are not provided. In fact, the exercise has enough ambiguity that many different answers will qualify as correct. You should be able to prove the correctness of your own answers using readily available tools—and in so doing you'll have learned quite a lot.

11 changes: 5 additions & 6 deletions 02-training-and-regularization-tactics/readme.md
@@ -26,8 +26,7 @@ Every node in a neural network has an activation function. The primary purpose o

### Helpful documentation

* [Keras Activations](https://keras.io/activations/)
* [Keras Advanced Activations](https://keras.io/layers/advanced-activations/)
* [TF/Keras Activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations)
* [ML Cheatsheet: Activation Functions](https://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html)
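As a quick, hedged illustration of how activations appear in tf.keras code (layer sizes arbitrary): they can be named by string, passed from `tf.keras.activations`, or added as standalone layers for the advanced variants.

```python
import tensorflow as tf

# Two equivalent ways to attach an activation to a layer:
dense_a = tf.keras.layers.Dense(64, activation="relu")
dense_b = tf.keras.layers.Dense(64, activation=tf.keras.activations.relu)

# Advanced activations such as LeakyReLU are standalone layers:
leaky = tf.keras.layers.LeakyReLU(alpha=0.1)
```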

### Resources For Further Exploration
@@ -49,15 +48,15 @@ Loss functions are the way we quantify our network's error, and therefore how we

### Helpful Documentation

* [Keras Built In Loss Functions](https://keras.io/losses/)
* [TF/Keras Built In Loss Functions](https://www.tensorflow.org/api_docs/python/tf/keras/losses)
* [ML Cheatsheet: Loss functions](https://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html)
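For illustration only (the model shape is arbitrary): a loss can be given by string name, as a loss object, or as a plain custom function of `(y_true, y_pred)`, which is the pattern the custom-loss guide linked below walks through.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])

# Built-in losses can be passed by name (e.g. "mse") or as loss objects:
model.compile(optimizer="adam",
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True))

# A custom loss is just a function returning a loss tensor:
def weighted_mse(y_true, y_pred):
    return 0.5 * tf.reduce_mean(tf.square(y_true - y_pred))

model.compile(optimizer="adam", loss=weighted_mse)
```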

### Resources for Further Exploration

* [5 Regression Loss Functions All Machine Learners Should Know](https://heartbeat.fritz.ai/5-regression-loss-functions-all-machine-learners-should-know-4fb140e9d4b0)
* [How to Choose a Loss Function](https://machinelearningmastery.com/how-to-choose-loss-functions-when-training-deep-learning-neural-networks/)
* [Picking Loss Functions](https://rohanvarma.me/Loss-Functions/)
* [Building a Complex Custom Loss Function in Keras](https://towardsdatascience.com/advanced-keras-constructing-complex-custom-losses-and-metrics-c07ca130a618)
* [Building a Complex Custom Loss Function in TF/Keras](https://www.tensorflow.org/guide/keras/train_and_evaluate#custom_losses)


## Part 3: Optimizers
@@ -90,8 +89,8 @@ Regularization is a key aspect of any kind of statistical modeling. In general i

### Helpful Documentation

* [Keras Docs, Dropout](https://keras.io/layers/core/#dropout)
* [Keras Docs, Early Stopping](https://keras.io/callbacks/#earlystopping)
* [TF/Keras Docs, Dropout](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout)
* [TF/Keras Docs, Early Stopping](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping)
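A hedged sketch combining the two tools linked above; the input shape is assumed and `x_train`/`y_train` are placeholder names, not variables from the course notebooks.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dropout(0.5),   # randomly zero 50% of activations at each training step
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

# model.fit(x_train, y_train, validation_split=0.2, epochs=50, callbacks=[early_stop])
```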

### Resources For Further Exploration

4 changes: 3 additions & 1 deletion 03-data-preprocessing/readme.md
@@ -95,12 +95,14 @@ Image augmentation is a powerful way to turn a small data set into a larger one,

### Helpful Documentation

* [Keras: Image Preprocessing](https://keras.io/preprocessing/image/)
* [TF/Keras: Image Preprocessing](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image)
* [TF Core: Image Preprocessing](https://www.tensorflow.org/api_docs/python/tf/image)
* [PIL Image Module](https://pillow.readthedocs.io/en/stable/reference/Image.html)
* [PIL ImageOps Module](https://pillow.readthedocs.io/en/stable/reference/ImageOps.html)
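A sketch of the augmentation interface linked above; the transform parameters and directory path are placeholders rather than values from the course.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,       # random rotations up to 20 degrees
    width_shift_range=0.1,   # horizontal shifts up to 10% of image width
    horizontal_flip=True,
)

# flow_from_directory yields augmented batches straight from a folder of images:
# train_gen = datagen.flow_from_directory("path/to/train", target_size=(224, 224),
#                                         batch_size=32, class_mode="categorical")
```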

### Resources for Further Exploration

* [Image Augmentation for Deep Learning With Keras](https://machinelearningmastery.com/image-augmentation-deep-learning-keras/)
* Tensorflow added this same interface, so everything here works, but the imports change slightly (`from tensorflow.keras` instead of `from keras`).
* [Paper: The Effectiveness of Data Augmentation in Image Classification using Deep Learning](http://cs231n.stanford.edu/reports/2017/pdfs/300.pdf)
* [Preprocessing for deep learning: from covariance matrix to image whitening](https://medium.freecodecamp.org/https-medium-com-hadrienj-preprocessing-for-deep-learning-9e2b9c75165c)
@@ -8,30 +8,29 @@ This exercise is meant to help you:

* Read and prepare the Oxford Pets dataset for processing with a neural network.
* Implement a convolutional neural network that performs object localization.
* Implement a CNN that provides two predictions (classification and localization) by using the Keras [functional API](https://keras.io/getting-started/functional-api-guide/).
* Use a custom loss function in Keras.
* Implement a CNN that provides two predictions (classification and localization) by using the [Tensorflow functional API](https://www.tensorflow.org/guide/keras/functional).

## Exercise Notes

You may wish to use Jupyter notebooks to complete this exercise, or you might prefer to write Python code and run it via the terminal, use an IDE like PyCharm, or use some other technology stack. Feel free to use any technology stack and workflow you are comfortable with. Our goal is to provide an exercise that helps you learn and solidify Deep Learning concepts and the details of the Keras framework—not to enforce a specific workflow, tool, or strategy for executing Python code.

This exercise should take about 1 hour of actual work, but the training time for these models can be significant. You may wish to run training sessions overnight, during your lunch break, or just plan on leaving your computer while training your networks.

The provided Jupyter notebooks contain much of the information you need to complete this exercise. However, you should also expect to look up information from the Keras docs, the provided external reading material, and other sources. You are encouraged to search for information on your own.
The provided Jupyter notebooks contain much of the information you need to complete this exercise. However, you should also expect to look up information from the Tensorflow docs, the provided external reading material, and other sources. You are encouraged to search for information on your own.

Finally, this is not an exam. Correct answers are not provided. In fact, the exercise has enough ambiguity that many different answers will qualify as correct. You should be able to prove the correctness of your own answers using readily available tools—and in so doing you'll have learned quite a lot.

## The Exercise

Your goal is to build and train a neural network that performs single object localization using the Keras framework and the Oxford Pets dataset. You can leverage much of the code from the Object Localization Jupyter notebook, and you'll have to add and modify some code as well.
Your goal is to build and train a neural network that performs single object localization using the Tensorflow framework and the Oxford Pets dataset. You can leverage much of the code from the Object Localization Jupyter notebook, and you'll have to add and modify some code as well.

### Part 1: Download the data

You can download the images and annotations for the Oxford Pets dataset from this website [https://www.robots.ox.ac.uk/~vgg/data/pets/](https://www.robots.ox.ac.uk/~vgg/data/pets/). The downloads section is near the top of the page. You will need to download both the "images" and "ground truth" datasets for this exercise.

### Part 2: Parse and prepare the data

The dataset is described extensively on the website, as well as in the Object Localization notebook in this same folder. Additionally, the notebook contains Python code that parses the raw data into a format that is ready for Keras to process. For each image you should:
The dataset is described extensively on the website, as well as in the Object Localization notebook in this same folder. Additionally, the notebook contains Python code that parses the raw data into a format that is ready for Tensorflow to process. For each image you should:

1. Extract the image data and turn it into a Numpy array.
1. Ensure the image is square by padding it appropriately with black pixels.
@@ -45,7 +44,7 @@ The provided notebook has code that performs all of these steps, you may wish to
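The referenced notebook already handles these steps; purely as a hedged illustration of the black-pixel padding step listed above (the notebook's actual approach may differ), one NumPy option is:

```python
import numpy as np

def pad_to_square(img):
    """Pad an H x W (x C) image array with black pixels so it becomes square."""
    h, w = img.shape[:2]
    size = max(h, w)
    padded = np.zeros((size, size) + img.shape[2:], dtype=img.dtype)  # black canvas
    padded[:h, :w] = img  # place the original image in the top-left corner
    return padded
```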

### Part 3: Import and prepare a CNN

As in previous labs and exercises, we're applying transfer learning. Import a pre-trained network from Keras with `include_top=False`. Then, using the Keras functional API, give that network two prediction heads: one for classification and one for object localization.
As in previous labs and exercises, we're applying transfer learning. Import a pre-trained network from Tensorflow with `include_top=False`. Then, using the Tensorflow functional API, give that network two prediction heads: one for classification and one for object localization.

At this point you'll also have to decide which loss functions to use and how to weight the predictions from each head during training. In the notebook we used `binary_crossentropy` for the classifier and `mse` for the localizer, with weights of `1` and `800` respectively (those weights were chosen arbitrarily, but worked decently). You may wish to experiment with other options.
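A minimal, hedged sketch of such a two-headed model follows; the backbone choice (MobileNetV2), the 224x224 input, and the sigmoid box encoding are assumptions, while the `binary_crossentropy`/`mse` losses and the `1`/`800` weights mirror the notebook values quoted above.

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: freeze the pre-trained backbone

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
class_head = tf.keras.layers.Dense(1, activation="sigmoid", name="classifier")(x)
box_head = tf.keras.layers.Dense(4, activation="sigmoid", name="localizer")(x)  # normalized box

model = tf.keras.Model(inputs=base.input, outputs=[class_head, box_head])
model.compile(optimizer="adam",
              loss={"classifier": "binary_crossentropy", "localizer": "mse"},
              loss_weights={"classifier": 1, "localizer": 800})
```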
