{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Simulated Linear Regression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Abstract"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
"In order to understand what TensorFlow can do, here is a little demo that makes up some phony data following a certain rulw, and then fits a line to it using Linear Regression. In the end, we expect that TensorFlow will be able to find out the parameters used to make up the phony data.\n", | ||
"\n", | ||
"Linear Regression is a Machine Learning algorithm that models the relationship between a dependent variable and one or more independent variables." | ||
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This tutorial is taken, with slight modification and different annotations, from [TensorFlow's official documentation](https://www.tensorflow.org/versions/r0.11/get_started/index.html) and [Professor Jordi Torres' _First Contact with TensorFlow_](http://www.jorditorres.org/first-contact-with-tensorflow/).\n",
    "\n",
    "This tutorial is intended for readers who are new to both machine learning and TensorFlow."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Preparation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
"Let's first start by creating 1000 phony x, y data points. In order to accomplish this, we will use NumPy. NumPy is an extension to the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large library of high-level mathematical functions to operate on these.\n", | ||
"\n", | ||
"In particular, we will take advantage of the _numpy.random.normal()_ function, which draws random samples from a normal (Gaussian) distribution (also called the bell curve because of its characteristic shape). The normal distributions occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution\n", | ||
"\n", | ||
"The rule that our phony data points will follow is:\n", | ||
"\n", | ||
"_y = x * 0.1 + 0.3_\n", | ||
"\n", | ||
"To this, we will add an \"error\" following a normal distribution." | ||
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "num_points = 1000\n",
    "vectors_set = []\n",
    "for i in range(num_points):\n",
    "    x1 = np.random.normal(0.0, 0.55)\n",
    "    y1 = x1 * 0.1 + 0.3 + np.random.normal(0.0, 0.03)\n",
    "    vectors_set.append([x1, y1])\n",
    "\n",
    "x_data = [v[0] for v in vectors_set]\n",
    "y_data = [v[1] for v in vectors_set]"
   ]
  },
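  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, we can plot the phony data to see the cloud of points scattered around the line _y = x * 0.1 + 0.3_. This cell is an addition to the original text: a minimal sketch that assumes matplotlib is installed; it is not required for the rest of the notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "%matplotlib inline\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Scatter the phony data; it should cluster around y = x * 0.1 + 0.3\n",
    "plt.plot(x_data, y_data, 'ro', label='phony data')\n",
    "plt.legend()\n",
    "plt.show()"
   ]
  },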
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
"Linear Regression models can be represented with just two parameters: _W_ (the slope) and _b_ (the y-intercept).\n", | ||
"\n", | ||
"We want to generate a TensorFlow algorithm to find the best parameters _W_ and _b_ that from input data x_data describe the underlying rule.\n", | ||
"\n", | ||
"First, let's begin by defining two _Variable_ ops: one for the slope and one the y-intercept." | ||
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import tensorflow as tf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))\n",
    "b = tf.Variable(tf.zeros([1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then, let's use two other ops to describe the relationship between x_data, _W_ and _b_: the linear function (a first-degree polynomial)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "y = tf.add(tf.mul(x_data, W), b)  # W * x_data + b"
   ]
  },
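  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on versions (an addition to the original text): this notebook follows TensorFlow r0.11, where the element-wise product op is _tf.mul_. In TensorFlow 1.0 and later it was renamed _tf.multiply_, and the overloaded Python operators build the same graph node, so on a newer release an equivalent cell would be:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Equivalent on TensorFlow >= 1.0 (tf.mul was renamed tf.multiply);\n",
    "# the overloaded * and + operators create the same ops as above.\n",
    "y = W * x_data + b"
   ]
  },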
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In order to find the best _W_ and _b_, we need to minimize the mean squared error between the predicted _y_ and the actual y_data, written out below. We accomplish this using a Gradient Descent Optimizer, which takes a learning rate as its argument (here, 0.5)."
   ]
  },
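  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Spelled out (this formula is implied by the code below rather than stated in the original text), the loss over the $N = 1000$ points is\n",
    "\n",
    "$$loss = \\frac{1}{N} \\sum_{i=1}^{N} (W x_i + b - y_i)^2,$$\n",
    "\n",
    "and each gradient descent step moves the parameters against the gradient, e.g. $W \\leftarrow W - \\alpha \\, \\frac{\\partial loss}{\\partial W}$, with learning rate $\\alpha = 0.5$."
   ]
  },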
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "loss = tf.reduce_mean(tf.square(y - y_data))\n",
    "optimizer = tf.train.GradientDescentOptimizer(0.5)\n",
    "train = optimizer.minimize(loss)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before starting, we need an op that initializes the variables. We will 'run' this op first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "init = tf.initialize_all_variables()"
   ]
  },
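  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(Another version note, added to the original text: _tf.initialize_all_variables()_ was deprecated after this release; on TensorFlow 1.0 and later, use _tf.global_variables_initializer()_ instead.)"
   ]
  },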
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then, we launch the graph."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "sess = tf.Session()\n",
    "sess.run(init)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, fit the line. To do this, let's run the training op 200 times (200 epochs, since each step uses all 1,000 points)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "for step in range(200):\n",
    "    sess.run(train)"
   ]
  },
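  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to watch convergence, a logging variant of the loop above (a sketch added to the original text; running it after the cell above simply continues from the already-fitted values) prints the current _W_, _b_, and loss every 20 steps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Same training loop, but report progress every 20 steps\n",
    "for step in range(200):\n",
    "    sess.run(train)\n",
    "    if step % 20 == 0:\n",
    "        print(step, sess.run(W), sess.run(b), sess.run(loss))"
   ]
  },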
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, let's see whether TensorFlow learned that the best fit is near W: [0.1], b: [0.3]. (The match is not exact because our \"phony\" input data contained some noise: the \"error\" we added.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 0.0989345] [ 0.30105537]\n"
     ]
    }
   ],
   "source": [
    "print(sess.run(W), sess.run(b))"
   ]
  },
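  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a closing check (again a sketch added to the original tutorial, assuming matplotlib), we can plot the learned line over the phony data; it should pass through the middle of the point cloud."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "%matplotlib inline\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Draw the phony points and the fitted line y = W * x + b\n",
    "plt.plot(x_data, y_data, 'ro', label='phony data')\n",
    "plt.plot(x_data, sess.run(W) * np.array(x_data) + sess.run(b), label='fitted line')\n",
    "plt.legend()\n",
    "plt.show()"
   ]
  }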
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
} |