completed part 1 of deep learning

2019-07-18 22:21:59 +01:00
parent 906912eb15
commit 1c8aec47c2
16 changed files with 3458 additions and 280 deletions


@@ -0,0 +1,244 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Implementing the Gradient Descent Algorithm\n",
"\n",
"In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"#Some helper functions for plotting and drawing lines\n",
"\n",
"def plot_points(X, y):\n",
" admitted = X[np.argwhere(y==1)]\n",
" rejected = X[np.argwhere(y==0)]\n",
" plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')\n",
" plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')\n",
"\n",
"def display(m, b, color='g--'):\n",
" plt.xlim(-0.05,1.05)\n",
" plt.ylim(-0.05,1.05)\n",
" x = np.arange(-10, 10, 0.1)\n",
" plt.plot(x, m*x+b, color)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reading and plotting the data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"data = pd.read_csv('data.csv', header=None)\n",
"X = np.array(data[[0,1]])\n",
"y = np.array(data[2])\n",
"plot_points(X,y)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## TODO: Implementing the basic functions\n",
"Here is your turn to shine. Implement the following formulas, as explained in the text.\n",
"- Sigmoid activation function\n",
"\n",
"$$\\sigma(x) = \\frac{1}{1+e^{-x}}$$\n",
"\n",
"- Output (prediction) formula\n",
"\n",
"$$\\hat{y} = \\sigma(w_1 x_1 + w_2 x_2 + b)$$\n",
"\n",
"- Error function\n",
"\n",
"$$Error(y, \\hat{y}) = - y \\log(\\hat{y}) - (1-y) \\log(1-\\hat{y})$$\n",
"\n",
"- The function that updates the weights\n",
"\n",
"$$ w_i \\longrightarrow w_i + \\alpha (y - \\hat{y}) x_i$$\n",
"\n",
"$$ b \\longrightarrow b + \\alpha (y - \\hat{y})$$"
]
},
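{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Aside (not part of the original lab): the update rule above is gradient descent on the error. Writing $z = w_1 x_1 + w_2 x_2 + b$, the identity $\\sigma'(z) = \\sigma(z)(1-\\sigma(z))$ and the chain rule give*\n",
"\n",
"$$\\frac{\\partial Error}{\\partial w_i} = -(y - \\hat{y})\\, x_i, \\qquad \\frac{\\partial Error}{\\partial b} = -(y - \\hat{y}),$$\n",
"\n",
"*so stepping each parameter against its gradient with step size $\\alpha$ yields exactly the updates above.*"
]
},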
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Implement the following functions\n",
"\n",
"# Activation (sigmoid) function\n",
"def sigmoid(x):\n",
" pass\n",
"\n",
"# Output (prediction) formula\n",
"def output_formula(features, weights, bias):\n",
" pass\n",
"\n",
"# Error (log-loss) formula\n",
"def error_formula(y, output):\n",
" pass\n",
"\n",
"# Gradient descent step\n",
"def update_weights(x, y, weights, bias, learnrate):\n",
" pass"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Training function\n",
"This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"np.random.seed(44)\n",
"\n",
"epochs = 100\n",
"learnrate = 0.01\n",
"\n",
"def train(features, targets, epochs, learnrate, graph_lines=False):\n",
" \n",
" errors = []\n",
" n_records, n_features = features.shape\n",
" last_loss = None\n",
" weights = np.random.normal(scale=1 / n_features**.5, size=n_features)\n",
" bias = 0\n",
" for e in range(epochs):\n",
" del_w = np.zeros(weights.shape)\n",
" for x, y in zip(features, targets):\n",
" output = output_formula(x, weights, bias)\n",
" error = error_formula(y, output)\n",
" weights, bias = update_weights(x, y, weights, bias, learnrate)\n",
" \n",
" # Printing out the log-loss error on the training set\n",
" out = output_formula(features, weights, bias)\n",
" loss = np.mean(error_formula(targets, out))\n",
" errors.append(loss)\n",
" if e % (epochs / 10) == 0:\n",
" print(\"\\n========== Epoch\", e,\"==========\")\n",
" if last_loss and last_loss < loss:\n",
" print(\"Train loss: \", loss, \" WARNING - Loss Increasing\")\n",
" else:\n",
" print(\"Train loss: \", loss)\n",
" last_loss = loss\n",
" predictions = out > 0.5\n",
" accuracy = np.mean(predictions == targets)\n",
" print(\"Accuracy: \", accuracy)\n",
" if graph_lines and e % (epochs / 100) == 0:\n",
" display(-weights[0]/weights[1], -bias/weights[1])\n",
" \n",
"\n",
" # Plotting the solution boundary\n",
" plt.title(\"Solution boundary\")\n",
" display(-weights[0]/weights[1], -bias/weights[1], 'black')\n",
"\n",
" # Plotting the data\n",
" plot_points(features, targets)\n",
" plt.show()\n",
"\n",
" # Plotting the error\n",
" plt.title(\"Error Plot\")\n",
" plt.xlabel('Number of epochs')\n",
" plt.ylabel('Error')\n",
" plt.plot(errors)\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Time to train the algorithm!\n",
"When we run the function, we'll obtain the following:\n",
"- 10 updates with the current training loss and accuracy\n",
"- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.\n",
"- A plot of the error function. Notice how it decreases as we go through more epochs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"train(X, y, epochs, learnrate, True)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,59 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Solutions"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# Activation (sigmoid) function\n",
"def sigmoid(x):\n",
"    return 1 / (1 + np.exp(-x))\n",
"\n",
"# Output (prediction) formula\n",
"def output_formula(features, weights, bias):\n",
"    return sigmoid(np.dot(features, weights) + bias)\n",
"\n",
"# Error (log-loss) formula\n",
"def error_formula(y, output):\n",
"    return - y*np.log(output) - (1 - y) * np.log(1-output)\n",
"\n",
"# Gradient descent step\n",
"def update_weights(x, y, weights, bias, learnrate):\n",
"    output = output_formula(x, weights, bias)\n",
"    d_error = y - output\n",
"    weights += learnrate * d_error * x\n",
"    bias += learnrate * d_error\n",
"    return weights, bias"
]
}
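,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Optional aside (not part of the original solution): `update_weights` makes one stochastic update per example. A batch variant averages the gradient over the whole dataset instead; here is a minimal sketch, assuming `features` is an `(n, 2)` array and `targets` an `(n,)` array of 0/1 labels.*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def batch_update(features, targets, weights, bias, learnrate):\n",
"    # Batch gradient descent: average the gradient over all examples\n",
"    outputs = output_formula(features, weights, bias)\n",
"    d_error = targets - outputs  # shape (n,)\n",
"    weights = weights + learnrate * np.dot(d_error, features) / len(targets)\n",
"    bias = bias + learnrate * np.mean(d_error)\n",
"    return weights, bias"
]
}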
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -0,0 +1,100 @@
0.78051,-0.063669,1
0.28774,0.29139,1
0.40714,0.17878,1
0.2923,0.4217,1
0.50922,0.35256,1
0.27785,0.10802,1
0.27527,0.33223,1
0.43999,0.31245,1
0.33557,0.42984,1
0.23448,0.24986,1
0.0084492,0.13658,1
0.12419,0.33595,1
0.25644,0.42624,1
0.4591,0.40426,1
0.44547,0.45117,1
0.42218,0.20118,1
0.49563,0.21445,1
0.30848,0.24306,1
0.39707,0.44438,1
0.32945,0.39217,1
0.40739,0.40271,1
0.3106,0.50702,1
0.49638,0.45384,1
0.10073,0.32053,1
0.69907,0.37307,1
0.29767,0.69648,1
0.15099,0.57341,1
0.16427,0.27759,1
0.33259,0.055964,1
0.53741,0.28637,1
0.19503,0.36879,1
0.40278,0.035148,1
0.21296,0.55169,1
0.48447,0.56991,1
0.25476,0.34596,1
0.21726,0.28641,1
0.67078,0.46538,1
0.3815,0.4622,1
0.53838,0.32774,1
0.4849,0.26071,1
0.37095,0.38809,1
0.54527,0.63911,1
0.32149,0.12007,1
0.42216,0.61666,1
0.10194,0.060408,1
0.15254,0.2168,1
0.45558,0.43769,1
0.28488,0.52142,1
0.27633,0.21264,1
0.39748,0.31902,1
0.5533,1,0
0.44274,0.59205,0
0.85176,0.6612,0
0.60436,0.86605,0
0.68243,0.48301,0
1,0.76815,0
0.72989,0.8107,0
0.67377,0.77975,0
0.78761,0.58177,0
0.71442,0.7668,0
0.49379,0.54226,0
0.78974,0.74233,0
0.67905,0.60921,0
0.6642,0.72519,0
0.79396,0.56789,0
0.70758,0.76022,0
0.59421,0.61857,0
0.49364,0.56224,0
0.77707,0.35025,0
0.79785,0.76921,0
0.70876,0.96764,0
0.69176,0.60865,0
0.66408,0.92075,0
0.65973,0.66666,0
0.64574,0.56845,0
0.89639,0.7085,0
0.85476,0.63167,0
0.62091,0.80424,0
0.79057,0.56108,0
0.58935,0.71582,0
0.56846,0.7406,0
0.65912,0.71548,0
0.70938,0.74041,0
0.59154,0.62927,0
0.45829,0.4641,0
0.79982,0.74847,0
0.60974,0.54757,0
0.68127,0.86985,0
0.76694,0.64736,0
0.69048,0.83058,0
0.68122,0.96541,0
0.73229,0.64245,0
0.76145,0.60138,0
0.58985,0.86955,0
0.73145,0.74516,0
0.77029,0.7014,0
0.73156,0.71782,0
0.44556,0.57991,0
0.85275,0.85987,0
0.51912,0.62359,0


@@ -0,0 +1,122 @@
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd


# Some helper functions for plotting and drawing lines
def plot_points(X, y):
    admitted = X[np.argwhere(y == 1)]
    rejected = X[np.argwhere(y == 0)]
    plt.scatter([s[0][0] for s in rejected],
                [s[0][1] for s in rejected], s=25,
                color='blue', edgecolor='k')
    plt.scatter([s[0][0] for s in admitted],
                [s[0][1] for s in admitted],
                s=25, color='red', edgecolor='k')


def display(m, b, color='g--'):
    plt.xlim(-0.05, 1.05)
    plt.ylim(-0.05, 1.05)
    x = np.arange(-10, 10, 0.1)
    plt.plot(x, m * x + b, color)


data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0, 1]])
y = np.array(data[2])
plot_points(X, y)
plt.show()


# Activation (sigmoid) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))


# Output (prediction) formula
def output_formula(features, weights, bias):
    return sigmoid(np.dot(features, weights) + bias)


# Error (log-loss) formula
def error_formula(y, output):
    return -y * np.log(output) - (1 - y) * np.log(1 - output)


# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
    output = output_formula(x, weights, bias)
    d_error = y - output
    weights += learnrate * d_error * x
    bias += learnrate * d_error
    return weights, bias


"""
Training function
This function iterates the gradient descent algorithm through all the data
for a number of epochs. It also plots the data and some of the boundary
lines obtained as the algorithm runs.
"""
np.random.seed(44)

epochs = 100
learnrate = 0.01


def train(features, targets, epochs, learnrate, graph_lines=False):
    errors = []
    n_records, n_features = features.shape
    last_loss = None
    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
    bias = 0
    for e in range(epochs):
        for x, y in zip(features, targets):
            weights, bias = update_weights(x, y, weights, bias, learnrate)

        # Printing out the log-loss error on the training set
        out = output_formula(features, weights, bias)
        loss = np.mean(error_formula(targets, out))
        errors.append(loss)
        if e % (epochs / 10) == 0:
            print("\n========== Epoch", e, "==========")
            if last_loss and last_loss < loss:
                print("Train loss: ", loss, " WARNING - Loss Increasing")
            else:
                print("Train loss: ", loss)
            last_loss = loss
            predictions = out > 0.5
            accuracy = np.mean(predictions == targets)
            print("Accuracy: ", accuracy)
        if graph_lines and e % (epochs / 100) == 0:
            display(-weights[0] / weights[1], -bias / weights[1])

    # Plotting the solution boundary
    plt.title("Solution boundary")
    display(-weights[0] / weights[1], -bias / weights[1], 'black')

    # Plotting the data
    plot_points(features, targets)
    plt.show()

    # Plotting the error
    plt.title("Error Plot")
    plt.xlabel('Number of epochs')
    plt.ylabel('Error')
    plt.plot(errors)
    plt.show()


train(X, y, epochs, learnrate, True)
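

# Optional sketch (not part of the original lab): classify a new point with
# the learned decision rule. train() does not return its parameters, so the
# weights and bias below are hypothetical stand-in values chosen only for
# illustration; in practice you would have train() return (weights, bias).
demo_weights = np.array([-4.0, -4.0])  # assumed values, not learned ones
demo_bias = 5.0
new_point = np.array([0.3, 0.4])
prob = output_formula(new_point, demo_weights, demo_bias)
print("P(label = 1):", prob, "-> predicted label:", int(prob > 0.5))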