{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import tensorflow as tf\n", "import matplotlib.pyplot as plt\n", "\n", "import helpers_05\n", "\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Convolutions" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to give our model more spatial awareness, provide translation invariance, and reduce the amount of parameters in our network, we need to share weights in such a way that the same pixels located in different parts of a image output identical activation values. The technique we end up using is called a _convolution_.\n", "\n", "![](images/convolution_animated.gif)\n", "\n", "The defining feature of a convolution is the _kernel_ (also known as a _filter_), which is a grid-like set of weights which slides over regions of an input image. At each step, the kernel weights are multiplied with the corresponding pixel values underneath. These multiplied values are then summed to get the output value at that point.\n", "\n", "![](images/convolution_still.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "By performing the operation in this way, we can use local spatial information while also making sure that each part of the image can be examined with the same weights.\n", "\n", "This is something that is easier to learn visually, so let's take a look at a simple example:\n", "\n", "![](images/basic_kernel_anim.gif)\n", "\n", "Here, we have a 5x5 input matrix, and our kernel is 3x3. When the kernel is placed in the top-left portion of the input, we end up with the following total sum:\n", "\n", "```\n", "(1 * -1) + (2 * 1) + (0 * 2) + (1 * 1) + (0 * 1) + (0 * 0) + (2 * -1)+ (2 * 0) = -2\n", "```\n", "\n", "We can see that the value of `-2` is the top-left entry in our output matrix. As we slide the kernel across the image, we get corresponding outputs which follow the kernel spatially. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Manual Convolutions with NumPy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's do a quick example in NumPy/TensorFlow to verify that the above illustration works. First, we'll create our input matrix and kernels in NumPy:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Example input matrix\n", "a = np.array([[1, 2, 0, 3, 1],\n", " [1, 0, 0, 2, 2],\n", " [2, 1, 2, 1, 1],\n", " [0, 0, 1, 0, 0],\n", " [1, 2, 1, 1, 1]]).reshape(1,5,5,1).astype(np.float32)\n", "\n", "# Example kernel\n", "kernel = np.array([[-1, 1, 2],\n", " [ 1, 1, 0],\n", " [-1, -2, 0]]).reshape(3,3,1,1).astype(np.float32)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that the values we set in our NumPy arrays match those in the visual above. A small implementation quirk is that we reshape the NumPy arrays. The input matrix goes from `[5, 5]` to `[1, 5, 5, 1]`. The values, from left to right, represent the number of elements:\n", " * in the batch (1), \n", " * the height (5), \n", " * the width (5), and \n", " * the number of channels (1). \n", " \n", "If our test matrix represented an RGB image, the shape would be `[1, 5, 5, 3]`. If there were 10 images in the batch, the shape would be `[10, 5, 5, 1]`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The kernel goes from `[3, 3]` to `[3, 3, 1, 1]`. 
The values, from left to right, represent:\n", " * the height of the kernel (3), \n", " * the width of the kernel (3), \n", " * the number of channels in the _input_ (1), and \n", " * the number of channels in the output (1). \n", " \n", "If the input represented an RGB image, we'd have the shape `[3, 3, 3, 1]`, and if we wanted our output to have 5 channels, the kernel shape would be `[3, 3, 1, 5]`. Think of the last two components as being similar to a weight matrix on a fully connected layer, which has the shape `[prev_num_neurons, curr_num_neurons]`.\n", " \n", "Now that we have our dummy data, let's run it and see what we get! " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "out = tf.Session().run(tf.nn.conv2d(input=a,\n", "                                    filter=kernel,\n", "                                    strides=[1, 1, 1, 1],\n", "                                    padding='VALID'))\n", "# np.squeeze() removes dimensions equal to `1` from a matrix/tensor\n", "# The result of tf.nn.conv2d is four dimensional, so this cleans it up\n", "print(out.squeeze())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `tf.nn.conv2d()` Operation expects four main arguments: `input`, `filter`, `strides`, and `padding`:\n", "\n", "* `input` is the input Tensor, in this case our `[1, 5, 5, 1]` test data.\n", "* `filter` is the kernel. Typically, this would be a `tf.Variable`, but in this demo it's just our `[3,3,1,1]` kernel.\n", "\n", "The other two inputs are things we haven't talked about yet: stride and padding." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Stride\n", "\n", "\"Stride\" refers to the number of squares we move our kernel for each step (both vertically and horizontally). In the above example, both our strides (horizontal and vertical) were 1, but if they were set to 2, the operation would look like this:\n", "\n", "![](images/basic_kernel_stride2_anim.gif)\n", "\n", "One note: we \"walk\" across using the horizontal stride (across columns) before we \"reset\" back to the left with the vertical stride (down rows). This is almost like an old-school typewriter with a *ding* when you get to the end of the line.\n", "\n", "Because we're skipping over a square in both directions, our output `Tensor` is 2x2 instead of 3x3. This is one way to reduce the spatial dimensions of a network; we'll look at another technique for this, pooling, later in this lesson.\n", "\n", "The way we set the stride in TensorFlow is with the `strides` parameter, which is a list of integers with the form `[1, vert_stride, hori_stride, 1]` (the first and last elements have to be `1`). We can test this out by modifying the code from above:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "out = tf.Session().run(tf.nn.conv2d(input=a,\n", "                                    filter=kernel,\n", "                                    strides=[1, 2, 2, 1],\n", "                                    padding='VALID'))\n", "print(out.squeeze())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "##### Padding\n", "\n", "You may have noticed that we ended up reducing the spatial dimensions from our input to the output: we start with a `5x5` Tensor, and end up with a `3x3` Tensor as output. While this is sometimes ok, it's often the case that we want to maintain the width and height of our input. We can achieve this by adding zeros around our image, or *padding* the image. 
By adding a ring of one or more zeros to the outside of our input and letting our kernel \"overflow\" on the sides, we can ensure that the output dimensions match the input dimensions.\n", "\n", "![](images/padded_kernel_anim.gif)\n", "\n", "_Side note: the number of zeros needed on the outside of the input depends on the size of the kernel: `3x3` needs one layer of zeros, `5x5` needs two, `7x7` needs three, etc._\n", "\n", "We can control padding in TensorFlow by using the `padding` parameter. `padding` takes a string which selects between two options:\n", "\n", "* `'VALID'` is what we used above: no zero padding. The idea is that we're only using \"real\" or \"valid\" input data to get outputs. You could also think of it as the kernel not going \"out of bounds\", thus staying \"valid\".\n", "* `'SAME'` uses zero-padding to keep the output dimensions equal to those of the input (assuming the horizontal and vertical strides are set to 1). This one is more self-explanatory: we're keeping the dimensions the \"same\".\n", "\n", "_Side note: this naming scheme comes from the convolution syntax in [NumPy](https://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html) and [MATLAB](https://www.mathworks.com/help/matlab/ref/conv2.html)._\n", "\n", "Let's try out `'SAME'` padding with our dummy data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "out = tf.Session().run(tf.nn.conv2d(input=a,\n", "                                    filter=kernel,\n", "                                    strides=[1, 1, 1, 1],\n", "                                    padding='SAME'))\n", "print(out.squeeze())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multiple layers of input and kernels" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When the input has more than one channel, the kernel has a matching depth: each input channel is convolved with its own slice of the kernel, and the per-channel results are summed into a single output value.\n", "\n", "![](images/two_layer_kernel_anim.gif)\n", "\n", "![](images/two_layer_kernel_final.png)\n", "\n", "##### Adding a Bias\n", "\n", "We will also add in a bias term for convolutions, the same way we would for a fully-connected layer. Each output channel of the filter has a constant number that is added along with the rest of the kernel dot product. We create a `Variable` vector with the same depth as the output `Tensor` (the last number in the shape of the kernel). Then, we use [`tf.nn.bias_add()`](https://www.tensorflow.org/versions/master/api_docs/python/nn/activation_functions_#bias_add) to add the bias value to the output of our convolution.\n", "\n", "```\n", "conv = tf.nn.conv2d(input, kernel, ...)\n", "bias = tf.Variable(tf.constant(0.1, shape=[depth]))\n", "total = tf.nn.bias_add(conv, bias)\n", "```\n", "\n", "For convolutions, the bias is usually initialized to some small positive value (as opposed to zero). We do this because the general go-to activation function for CNNs is ReLU, and ReLU units can \"die\" if they never receive positive inputs.\n", "\n", "##### Small Aside About Kernels\n", "\n", "[Kernels](https://en.wikipedia.org/wiki/Kernel_(image_processing)) have been used in image processing for many years for tasks such as edge detection, blurring, and sharpening. The basic idea of convolutional neural networks is that we can let the computer find useful kernels for the task instead of using handmade, pre-defined ones." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Pooling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Earlier, we saw that using a higher-stride convolution collapses the spatial dimensions of our network. 
While it is sometimes advantageous to let trained convolutions do this downsampling, a more common technique is _pooling_, which applies a fixed transformation to neighboring inputs (it is not trained and has zero parameters)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Max Pooling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In max pooling, the network looks at a chunk of neighboring pixels and outputs the largest pixel from that group:\n", "\n", "![Max pooling, size 2, stride 2](images/maxpool.png)\n", "\n", "In this example, we've split a `4x4` matrix into four groups (shown by colors), each holding four numbers. We take the largest number from each group to output the matrix on the right. To demonstrate this in TensorFlow, we'll use the [`tf.nn.max_pool()`](https://www.tensorflow.org/versions/master/api_docs/python/nn/pooling#max_pool) Op. Because each group is a `[2x2]` box, and there are 2 steps between boxes (horizontally and vertically), we say that this is a max pool, size 2, stride 2." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Set up our input array\n", "pool_me = np.array([[ 2, 1, 0, -1],\n", "                    [-3, 8, 2, 5],\n", "                    [ 1, -1, 3, 4],\n", "                    [ 0, 1, 1, -2]]).reshape(1,4,4,1).astype(np.float32)\n", "\n", "out = tf.Session().run(tf.nn.max_pool(value=pool_me,\n", "                                      ksize=[1, 2, 2, 1],\n", "                                      strides=[1, 2, 2, 1],\n", "                                      padding='VALID'))\n", "print(out.squeeze())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You'll notice that max pooling has similar parameters to `tf.nn.conv2d()`. The primary difference is `ksize` instead of `filter`: `ksize` is a list of integers giving the shape of the pooling kernel, `[1, k_height, k_width, 1]`. In this case, it's `[1, 2, 2, 1]`, as our kernel is two squares high and two squares wide.\n", "\n", "`strides` and `padding` act the same as they do for convolutions. In general the `strides` option is the same as or larger than `ksize` (no overlapping kernels), and `padding` is generally set to `'VALID'`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Average Pooling" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Average pooling is the same idea as max pooling, only instead we take the average of all the values in each group.\n", "\n", "![Average pooling, size 2, stride 2](images/avgpool.png)\n", "\n", "We can use average pooling with the [`tf.nn.avg_pool`](https://www.tensorflow.org/versions/master/api_docs/python/nn/pooling#avg_pool) Op:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "out = tf.Session().run(tf.nn.avg_pool(value=pool_me,\n", "                                      ksize=[1, 2, 2, 1],\n", "                                      strides=[1, 2, 2, 1],\n", "                                      padding='VALID'))\n", "print(out.squeeze())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Max or Avg Pool?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The jury is still out on whether max pooling or average pooling is more effective. At this time, I'd say max pooling is more common, though average pooling is used in several well-regarded modern models.\n", "\n", "You can read a thorough discussion in [_A Theoretical Analysis of Feature Pooling in Visual Recognition_ (Boureau et al.)](http://people.ee.duke.edu/~lcarin/icml2010b.pdf) if you want to dive deeper into the debate!" 
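] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before we move on to LeNet, here is a quick sanity check on the stride and padding rules above. This is a minimal sketch (the helper name `output_size` is ours, not a TensorFlow API): for `'VALID'` padding, the output size along a dimension is `floor((in - k) / stride) + 1`, and for `'SAME'` it is `ceil(in / stride)`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import math\n", "\n", "def output_size(in_size, k, stride, padding):\n", "    # 'VALID': the kernel must stay fully in bounds\n", "    if padding == 'VALID':\n", "        return (in_size - k) // stride + 1\n", "    # 'SAME': zero-pad so the output size depends only on the stride\n", "    return math.ceil(in_size / stride)\n", "\n", "# 5x5 input, 3x3 kernel, stride 1, VALID -> 3 (our first conv demo)\n", "print(output_size(5, 3, 1, 'VALID'))\n", "# 5x5 input, 3x3 kernel, stride 2, VALID -> 2 (the stride-2 demo)\n", "print(output_size(5, 3, 2, 'VALID'))\n", "# 4x4 input, 2x2 pool, stride 2, VALID -> 2 (the pooling demos)\n", "print(output_size(4, 2, 2, 'VALID'))\n", "# 5x5 input, 3x3 kernel, stride 1, SAME -> 5 (the 'SAME' demo)\n", "print(output_size(5, 3, 1, 'SAME'))"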
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# LeNet" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "[FIXME: Add text.] Our goal now is to implement LeNet. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# we're going to make use a few names that we defined last week\n", "# they've been gathered in helpers_05.py\n", "from helpers_05 import (batches, flatten, fully_connected_layer,\n", " fully_connected_sigmoid_layer,\n", " test_and_show_images)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Convolution Layers" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "[FIXME: add text]" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "def conv_layer(incoming, num_kernels, kernel_sz, \n", " strides=[1, 1, 1, 1], padding='SAME',\n", " bval=0.01, \n", " activation_fn=tf.nn.relu, \n", " name=None):\n", " prev_outshape = incoming.shape.dims[-1].value\n", " kshape = kernel_sz + [prev_outshape, num_kernels]\n", "\n", " fan_in = np.prod(incoming.shape[1:]).value\n", " xavier_stddev = np.sqrt(2.0 / fan_in)\n", " \n", " with tf.variable_scope(name, 'conv_layer'):\n", " w = tf.Variable(tf.truncated_normal(kshape, stddev=xavier_stddev), name='kernel')\n", " b = tf.Variable(tf.constant(bval, shape=[num_kernels]), name='bias')\n", " conv = tf.nn.conv2d(incoming, w, strides, padding, name='conv')\n", " z = tf.nn.bias_add(conv, b)\n", " return z if activation_fn is None else activation_fn(z)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "def pool_layer(incoming, ksize, strides=None, padding='VALID',\n", " pool_fn=tf.nn.max_pool, name=None):\n", " 'create a pooling layer: we auto-add the leading/trailing 1s'\n", " ksize = [1] + ksize + [1]\n", " # default strides to ksize\n", " strides = strides if strides is not None else ksize\n", " with tf.variable_scope(name, 'pool_layer'):\n", " return pool_fn(incoming, ksize, strides, padding)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## LeNet Sub-Networks" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get us warmed up, here is a simple network that shares some of the architecture of LeNet, but it is smaller and it uses sigmoid activations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def not_lenet(images):\n", " with tf.name_scope('small_model'):\n", " conv_1 = conv_layer(images, 6, [5, 5])\n", " pool_1 = pool_layer(conv_1, [2, 2])\n", " conv_2 = conv_layer(pool_1, 16, [5, 5])\n", " pool_2 = pool_layer(conv_2, [2, 2])\n", " \n", " flat = flatten(pool_2)\n", " \n", " fc_layer = fully_connected_sigmoid_layer\n", " fc_1 = fc_layer(flat, 120)\n", " fc_2 = fc_layer(fc_1, 84)\n", " fc_3 = fully_connected_layer(fc_2, 10, w_stddev = 0.5, activation_fn=None)\n", " \n", " return fc_3 " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We very briefly discussed Xavier initialization in the slides. It isn't that hard to implement, either. Here, we simply add a value to compute `w_stddev` instead of setting it to a fixed constant (see line 8 in the next cell)." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "def fully_connected_xavier_relu_layer(incoming_layer, num_nodes,\n", " b_val=0.01,\n", " keep_prob=None, name=None):\n", " ' pass through for fully_connected_layer with xavier init '\n", " incoming_layer = tf.convert_to_tensor(incoming_layer)\n", " prev_num_nodes = incoming_layer.shape.dims[-1].value\n", " \n", " w_stddev = np.sqrt(2.0 / prev_num_nodes)\n", "\n", " return fully_connected_layer(incoming_layer, num_nodes,\n", " w_stddev = w_stddev, b_val=b_val,\n", " activation_fn = tf.nn.relu,\n", " keep_prob=keep_prob,\n", " name=name)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the above as a starting point, we can now recrete that actual LeNet. Here's the table from the slides:\n", "\n", "![](images/lenet-table.png)\n", "\n", "and the graphic from the paper:\n", "\n", "![](images/lenet.png)\n", "\n", "Unfortunately, you have to combine information from the table and the graphic to get the numbers below. The depths come from the graphic (they precede the `@` signs on the top of the figure). The other values come from the table." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [], "source": [ "def lenet_small(images):\n", " with tf.name_scope('small_lenet_model'): \n", " conv_1 = conv_layer(images, 6, [5, 5])\n", " pool_1 = pool_layer(conv_1, [2, 2])\n", " conv_2 = conv_layer(pool_1, 16, [5, 5])\n", " pool_2 = pool_layer(conv_2, [2, 2])\n", "\n", " flat = flatten(pool_2)\n", "\n", " fc_layer = fully_connected_xavier_relu_layer\n", " fc_1 = fc_layer(flat, 120)\n", " fc_2 = fc_layer(fc_1, 84)\n", " #fc_3 = fc_layer(fc_2, 10)\n", " fc_4 = fully_connected_layer(fc_2, 10, activation_fn=None)\n", " \n", " return fc_4" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And here's a scaled up LeNet:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def lenet_big(images):\n", " with tf.name_scope('big_lenet_model'): \n", " conv_1 = conv_layer(images, 32, [5, 5])\n", " pool_1 = pool_layer(conv_1, [2, 2])\n", " conv_2 = conv_layer(pool_1, 64, [5, 5])\n", " pool_2 = pool_layer(conv_2, [2, 2])\n", "\n", " flat = flatten(pool_2)\n", "\n", " fc_layer = fully_connected_xavier_relu_layer\n", " fc_1 = fc_layer(flat, 400)\n", " fc_2 = fc_layer(fc_1, 200)\n", " fc_3 = fc_layer(fc_2, 200)\n", " fc_4 = fully_connected_layer(fc_3, 10, activation_fn=None)\n", " \n", " return fc_4" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Exercise" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You mission, should you choose to accept it, is to take the pieces we've developed and turn them into a working model for Lenet on MNIST. Here's a process to get you started:\n", "\n", " 1. Grab either your CIFAR or MNIST model from last week.\n", " 2. Modify it so that your s.out is coming from one of the LeNet versions we defined above.\n", " 3. Modify your `training` block so it uses a slightly different optimizer (we will talk about these in the coming weeks).\n", "```\n", " with tf.name_scope('train'):\n", " decayed_rate = tf.train.exponential_decay(s.learning_rate, global_step,\n", " 600, 0.998, True)\n", " momopt = tf.train.MomentumOptimizer\n", " s.train = momopt(decayed_rate, 0.9).minimize(s.loss)\n", "```\n", " 4. Read in your training and testing data with `helpers_05.get_mnist_dataset` (like we did last week)\n", " 5. Setup a training loop like we did last week. 
Note: use a `learning_rate=0.001`, and we don't need a separate `momentum` value (which we used last time).\n", " 6. If things are working right, you shouldn't need 15 epochs of training. Five or six should do.\n", " 7. Good luck! Feel free to consult the solution below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solution" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import tensorflow as tf\n", "import numpy as np\n", "import helpers_04\n", "from helpers_05 import batches, flatten, fully_connected_layer\n", "from sklearn.preprocessing import OneHotEncoder" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "(train_data, train_labels,\n", " test_data, test_labels) = helpers_04.create_mnist_dataset()" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(60000,)\n" ] } ], "source": [ "print(train_labels.shape)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "# one-hot encode the integer labels\n", "enc = OneHotEncoder(categories='auto')\n", "y_train = enc.fit_transform(train_labels.reshape(-1,1))\n", "y_train = y_train.toarray()\n", "X_train = train_data" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "X = tf.placeholder(tf.float32, shape=(None, 28, 28, 1), name='X')\n", "y = tf.placeholder(tf.float32, shape=(None, 10), name='y')" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "outp = lenet_small(X)\n", "softmax = tf.nn.softmax(outp)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "pred = tf.argmax(softmax, 1)\n", "correct_pred = tf.equal(pred, tf.argmax(y, 1))\n", "accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope('loss'):\n", "    # the cross-entropy op expects the raw (pre-softmax) logits, not the softmax output\n", "    cost = tf.nn.softmax_cross_entropy_with_logits(logits=outp, labels=y)\n", "    loss = tf.reduce_mean(cost)" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "with tf.name_scope('optimizer'):\n", "    learning_rate = 0.01\n", "    global_step = tf.Variable(0, trainable=False, name='global_step')\n", "    decayed_rate = tf.train.exponential_decay(learning_rate, global_step,\n", "                                              600, 0.998, True)\n", "    # optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\n", "    optimizer = tf.train.MomentumOptimizer(learning_rate=decayed_rate, momentum=0.9)\n", "    solver = optimizer.minimize(loss)" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch: 0000 cost: 2.186300302 acc: 0.26660\n", "epoch: 0020 cost: 1.861294520 acc: 0.59472\n", "epoch: 0040 cost: 1.854376352 acc: 0.59777\n" ] } ], "source": [ "with tf.name_scope('eval'):\n", " with tf.Session() as sess:\n", " sess.run(tf.global_variables_initializer())\n", " hist_loss = []\n", " hist_acc = []\n", " x_batches = np.array_split(X_train, 10)\n", " y_batches = np.array_split(y_train, 10)\n", " for epoch in range(0,50):\n", " avg_loss = 0\n", " avg_acc = 0\n", " for b in range(0,10):\n", " batch_x = x_batches[b]\n", " batch_y = y_batches[b]\n", " _, loss_val, acc_val = sess.run([solver, loss, accuracy],\n", " feed_dict={X:batch_x, y:batch_y})\n", " avg_loss = avg_loss + loss_val\n", " avg_acc = avg_acc + acc_val\n", " avg_loss = avg_loss / 
10\n", " avg_acc = avg_acc / 10\n", " hist_loss.append(avg_loss)\n", " hist_acc.append(avg_acc)\n", " if epoch % 20 == 0:\n", " print('epoch:', '%04d'%(epoch),\n", " 'cost:', '{:.9f}'.format(avg_loss),\n", " 'acc:', '{:.5f}'.format(avg_acc))\n" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[]" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAX4AAAD8CAYAAABw1c+bAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAHnRJREFUeJzt3X2QXNWd3vHv0y/TI82MkNCMQEgahEFrW8GA2DF+EbWxydoRjh1crtgLtWtvHLsUV+w1bHBtCH+sk2zt1qayRa1ddswSm2CXsSkqQJa4iA3B7GL8gjUCmTchWwgZCQlJI6GXkTQvPf3LH31npmfUM9OjaalHt59Plaq77z19+xzKfs6dc+89RxGBmZk1j0yjK2BmZmeXg9/MrMk4+M3MmoyD38ysyTj4zcyajIPfzKzJOPjNzJrMjMEvaZWkJyRtlfSipJurlPlDSc8l/34m6cqKfRskbZO0XdJt9W6AmZnNjmZ6gEvScmB5RDwjqQPYDHw0Il6qKPNeYGtEvCnpeuA/RcS7JGWBXwMfAHYDm4CbKr9rZmZnV26mAhGxF9ibvD8maSuwAniposzPKr7yC2Bl8v4aYHtE7ACQdB9wQ+V3q+ns7IzVq1fX3gozsya3efPmvojoqqXsjMFfSdJqYB3w9DTFPgP83+T9CmBXxb7dwLtm+p3Vq1fT29s7m6qZmTU1Sb+ttWzNwS+pHXgAuCUijk5R5v2Ug//a0U1VilUdW5K0EdgI0N3dXWu1zMxslmq6q0dSnnLo3xsRD05R5grgm8ANEXEw2bwbWFVRbCWwp9r3I+KuiOiJiJ6urpr+WjEzs9NQy109Ar5F+eLtHVOU6QYeBD4ZEb+u2LUJWCPpEkktwI3Aw3OvtpmZna5ahnrWA58Enpe0Jdl2O9ANEBF3An8OLAX+e7mfoJicvRclfQH4EZAF7o6IF+vcBjMzm4Va7up5iupj9ZVlPgt8dop9jwCPnFbtzMys7vzkrplZk3Hwm5k1mVQF/1cf/w3/+OsDja6Gmdm8lqrgv+vJHTzp4Dczm1aqgr+9kKN/oNjoapiZzWvpCv7WHP2DDn4zs+mkK/gLOY45+M3MppWq4O9ozdE/MNzoapiZzWupCv72god6zMxmkr7g98VdM7NppSv4Wz3Gb2Y2k1QFf0cy1DPTcpJmZs0sVcHf3pojAk4MjTS6KmZm81a6gr+QB/AFXjOzaaQr+FvLs0wf8wVeM7MppSr4Owrl4PcZv5nZ1FIV/KNn/L6l08xsaukK/rEzfj+9a2Y2lVQGv8f4zcymlqrg72j1GL+Z2UxSFfxtBY/xm5nNZMbgl7RK0hOStkp6UdLNVcq8TdLPJQ1K+tKkfTslPS9pi6TeelZ+snw2Q2s+4zN+M7Np5GooUwRujYhnJHUAmyU9FhEvVZQ5BHwR+OgUx3h/RPTNsa41aS/kPV+Pmdk0Zjzjj4i9EfFM8v4YsBVYManM/ojYBDT8dprynPwOfjOzqcxqjF/SamAd8PQsvhbAo5I2S9o4m987HZ6T38xserUM9QAgqR14ALglIo7O4jfWR8QeScuAxyS9HBFPVjn+RmAjQHd39ywOP1F7Iccxr8JlZjalms74JeUph/69EfHgbH4gIvYkr/uBh4Brpih3V0T0RERPV1fXbH5igvbWnO/jNzObRi139Qj4FrA1Iu6YzcEltSUXhJHUBnwQeOF0KlqrDg/1mJlNq5ahnvXAJ4HnJW1Jtt0OdANExJ2SLgR6gUVASdItwFqgE3io3HeQA74XET+sbxMmam918JuZTWfG4I+IpwDNUOYNYGWVXUeBK0+vaqdndN3diCDpcMzMrEKqntyF8hl/sRQMFkuNroqZ2byUuuDv8ERtZmbTSl3wt3uiNjOzaaUv+EfX3fUZv5lZVSkM/mSox4uxmJlVlbrg7/Dyi2Zm00pd8Ld7wXUzs2mlL/h9cdfMbFrpC37fzmlmNq3UBX8hlyGflc/4zcymkLrglzQ2bYOZmZ0qdcEPnqjNzGw66Qz+Qt5j/GZmU0hl8Jfn5PcDXGZm1aQy+D3UY2Y2tXQGvy/umplNKZ3B7zN+M7MppTL4OwpecN3MbCqpDP72Qo7BYokhr8JlZnaKdAZ/Ml/PcQ/3mJmdIp3B7xk6zcymNGPwS1ol6QlJWyW9KOnmKmXeJunnkgYlfWnSvg2StknaLum2elZ+KqNz8nuc38zsVLkayhSBWyPiGUkdwGZJj0XESxVlDgFfBD5a+UVJWeDrwAeA3cAmSQ9P+m7djS2/6DN+M7NTzHjGHxF7I+KZ5P0xYCuwYlKZ/RGxCZj8uOw1wPaI2BERQ8B9wA11qfk0xufk99O7ZmaTzWqMX9JqYB3wdI1fWQHsqvi8m0mdRsWxN0rqldR74MCB2VTrFJ6T38xsajUHv6R24AHglog4WuvXqmyLagUj4q6I6ImInq6urlqrVVWHV+EyM5tSTcEvKU859O+NiAdncfzdwKqKzyuBPbP4/mkZu6vHZ/xmZqeo5a4eAd8CtkbEHbM8/iZgjaRLJLUANwIPz76as7OwJYvkM34zs2pquatnPfBJ4HlJW5JttwPdABFxp6QLgV5gEVCSdAuwNiKOSvoC8CMgC9wdES/WuxGTja7C5TF+M7NTzRj8EfEU1cfqK8u8QXkYp9q+R4BHTqt2c1Cek9/Bb2Y2WSqf3IVkhk6f8ZuZnSK9we8zfjOzqtIb/K15jjn4zcxOkdrg7yjk6B/wk7tmZpOlNvg91GNmVl16g98Xd83Mqkpv8BdyHB8aYaRUdYYIM7OmldrgH52v5/iQz/rNzCqlNvg9X4+ZWXXpDX7P0GlmVlV6g99z8puZVZXa4Pec/GZm1aU2+MfW3fUZv5nZBOkNfq+7a2ZWVXqD32P8ZmZVpT74PcZvZjZRaoM/mxELW7Ie4zczmyS1wQ+eqM3MrJp0B39rznPym5lNkurgL8/J7+A3M6s0Y/BLWiXpCUlbJb0o6eYqZSTpq5K2S3pO0tUV+3ZKel7SFkm99W7AdNpbPdRjZjZZroYyReDWiHhGUgewWdJjEfFSRZnrgTXJv3cB30heR70/IvrqVela
tRdy9B07cbZ/1sxsXpvxjD8i9kbEM8n7Y8BWYMWkYjcA34myXwCLJS2ve21nqb2Q9xm/mdkksxrjl7QaWAc8PWnXCmBXxefdjHcOATwqabOkjadXzdPT0ZrjmNfdNTOboJahHgAktQMPALdExNHJu6t8ZXTpq/URsUfSMuAxSS9HxJNVjr8R2AjQ3d1da7WmNXo7Z0QgVauimVnzqemMX1KecujfGxEPVimyG1hV8XklsAcgIkZf9wMPAddU+42IuCsieiKip6urq/YWTKO9NUcp4OTwSF2OZ2aWBrXc1SPgW8DWiLhjimIPA59K7u55N3AkIvZKaksuCCOpDfgg8EKd6j4jr8JlZnaqWoZ61gOfBJ6XtCXZdjvQDRARdwKPAB8CtgMngE8n5S4AHkqGWXLA9yLih3Wr/QxG5+Q/Nlhk2dn6UTOzeW7G4I+Ip6g+hl9ZJoDPV9m+A7jytGs3R2OLsfiM38xsTKqf3B1bjMW3dJqZjUl58HtOfjOzyVId/F5318zsVKkO/vG7evwQl5nZqFQHf5tX4TIzO0Wqg78ll6GQy3iM38ysQqqDH5L5enzGb2Y2JvXB3+7FWMzMJkh/8HsxFjOzCdIf/D7jNzOboAmCP+8xfjOzCqkP/o7WHP2Dvo/fzGxU6oPfQz1mZhOlP/hbx1fhMjOzZgj+Qo7hkWCwWGp0VczM5oXUB78najMzmyj1we/lF83MJmqe4PcZv5kZ0AzB3+rFWMzMKqU++Du8/KKZ2QSpD/72sYu7fojLzAxqCH5JqyQ9IWmrpBcl3VyljCR9VdJ2Sc9Jurpi3wZJ25J9t9W7ATPxxV0zs4lqOeMvArdGxNuBdwOfl7R2UpnrgTXJv43ANwAkZYGvJ/vXAjdV+e4ZNXo7p+frMTMrmzH4I2JvRDyTvD8GbAVWTCp2A/CdKPsFsFjScuAaYHtE7IiIIeC+pOxZU8hlyGXkM34zs8SsxvglrQbWAU9P2rUC2FXxeXeybart1Y69UVKvpN4DBw7MplrTkuQ5+c3MKtQc/JLagQeAWyLi6OTdVb4S02w/dWPEXRHRExE9XV1dtVarJp6ozcxsXK6WQpLylEP/3oh4sEqR3cCqis8rgT1AyxTbz6r2gtfdNTMbVctdPQK+BWyNiDumKPYw8Knk7p53A0ciYi+wCVgj6RJJLcCNSdmzatGCPK8dPMFIyTN0mpnVMtSzHvgkcJ2kLcm/D0n6nKTPJWUeAXYA24H/Afw7gIgoAl8AfkT5ovD9EfFivRsxkz/oWcW2fcf4uydfOds/bWY278w41BMRT1F9rL6yTACfn2LfI5Q7hob52NUr+PG2/dzx6K+59rJOrli5uJHVMTNrqNQ/uQvlO3v+6qPvoKujwM33beHEkMf7zax5NUXwA5y3MM8dn7iKnQeP8xc/eKnR1TEza5imCX6A91y6lM/900v5/i938cMX3mh0dczMGqKpgh/gT3//d3jHivO47cHn2Hd0oNHVMTM765ou+FtyGf72xqsYHC5x6/2/ouRbPM2syTRd8ANc2tXOn39kLU9t7+ObT+1odHXMzM6qpgx+gBvfuYoPrr2Av3rkZT777U28/MbkWSjMzNKpaYNfEl+9aR1/tuGtPP3qIa7/yk/49/dvYdehE42umpnZGaXys1fzS09PT/T29p613zt8Yohv/OMr3PPTnZQi+MN3XcwXrruMzvbCWauDmdlcSNocET01lXXwj3vjyABfefw33N+7i0Iuw8d/dyX/ev0lXNLZdtbrYmY2Gw7+OXrlQD9ff2I7P/jVXoZLJa576zL+zbWX8N5Ll1Kes87MbH5x8NfJ/mMD3PuL1/juL37LweNDvPWCDj69fjWf6FlFJuMOwMzmj9kEf9Ne3K3Fso5W/vQDv8NPb7uO//avriCTEbc9+Dz39+6a+ctmZvOUg78GrfksH+9ZxSNfvJaVSxbwxLb9ja6Smdlpc/DPgiSuvayTn71y0Iu6mNk5y8E/S+sv6+TYQJHndh9udFXMzE6Lg3+W1l/WCcBPt/c1uCZmZqfHwT9L57e18E8uWsRTDn4zO0c5+E/DtZd18sxvD3slLzM7Jzn4T8P6yzoZGimxaeebja6KmdmszRj8ku6WtF/SC1PsXyLpIUnPSfqlpMsr9u2U9LykLZIa/0RWnbxz9fm0ZDMe5zezc1ItZ/z3ABum2X87sCUirgA+BXxl0v73R8RVtT5Rdi5Y0JLldy9ewk9+4+A3s3PPjMEfEU8Ch6YpshZ4PCn7MrBa0gX1qd78de2aTrbuPUpf/2Cjq2JmNiv1GOP/FfAxAEnXABcDK5N9ATwqabOkjXX4rXnj2uS2zp+9crDBNTEzm516BP9fA0skbQH+BHgWGL3dZX1EXA1cD3xe0u9NdRBJGyX1Suo9cOBAHap1Zl2+4jwWteb4qYd7zOwcM+fgj4ijEfHpiLiK8hh/F/Bqsm9P8rofeAi4Zprj3BURPRHR09XVNddqnXHZjHjvpZ08tb2P+TjDqZnZVOYc/JIWS2pJPn4WeDIijkpqk9SRlGkDPghUvTPoXLV+TSevHz7Jbw96uUYzO3fkZiog6fvA+4BOSbuBLwN5gIi4E3g78B1JI8BLwGeSr14APJQsXJIDvhcRP6x3AxppdJz/qe19rPYqXWZ2jpgx+CPiphn2/xxYU2X7DuDK06/a/Ld66UJWLF7AU7/p44/efXGjq2NmVhM/uTsH49M093maZjM7Zzj452j9mk6ODhR54fUjja6KmVlNHPxz9N5LlwJ4tk4zO2c4+Oeos73A25cv8rw9ZnbOcPDXwbWXLaV355ucHBppdFXMzGbk4K+D8Wmap5vSyMxsfnDw18G7LlnqaZrN7Jzh4K+DBS1ZLl+xiGde88IsZjb/OfjrZF33Ep7bfYThkVKjq2JmNi0Hf52s617MYLHEy3uPNboqZmbTcvDXybruJQA8u8vDPWY2vzn46+Si81rp6iiw5bXDja6Kmdm0HPx1Iol1qxbz7C4Hv5nNbw7+OlrXvYRX+47z5vGhRlfFzGxKDv46Wte9GIAtPus3s3nMwV9H71hxHhnh4R4zm9cc/HXUVsjx1gsX8awf5DKzeczBX2fruhezZddhSl6YxczmKQd/nV21ajHHBors6OtvdFXMzKpy8NfZ1ckF3md9P7+ZzVMO/jp7S2c7Ha05X+A1s3lrxuCXdLek/ZJemGL/EkkPSXpO0i8lXV6xb4OkbZK2S7qtnhWfrzIZcdWqxT7jN7N5q5Yz/nuADdPsvx3YEhFXAJ8CvgIgKQt8HbgeWAvcJGntnGp7jli3ajHb3jjK8cFio6tiZnaKGYM/Ip4Epltaai3weFL2ZWC1pAuAa4DtEbEjIoaA+4Ab5l7l+W9d9xJKAc+/fqTRVTEzO0U9xvh/BXwMQNI1wMXASmAFsKui3O5kW+pdtcoXeM1s/qpH8P81sETSFuBPgGeBIqAqZae8uV3SRkm9knoPHDhQh2o1zpK2FlYvXegHucxsXsrN9QARcRT4NIAkAa8m/xYCqyqKrgT2THOcu4C7AHp6es75p5/WdS/hqe19RAT
l/yxmZvPDnM/4JS2W1JJ8/CzwZNIZbALWSLok2X8j8PBcf+9csa57MQeODfL64ZONroqZ2QQznvFL+j7wPqBT0m7gy0AeICLuBN4OfEfSCPAS8JlkX1HSF4AfAVng7oh48Uw0Yj5at6q8IteWXYdZuWRhg2tjZjZuxuCPiJtm2P9zYM0U+x4BHjm9qp3b3ra8g0Iuw7OvHebDV1zU6OqYmY3xk7tnSD6b4R0rzvMFXjObdxz8Z9C67sW8sOcoQ8VSo6tiZjbGwX8GretewlCxxNa9RxtdFTOzMQ7+M2j8QS4P95jZ/OHgP4OWn9fKBYsK/OyVg4x4YRYzmycc/GeQJK572zIefWkf1/7XH3PHo9vYdehEo6tlZk1OEfPvTLSnpyd6e3sbXY26GCqWeHzrPu7btIsnf3OACLj2sk4+8c5VfHDtBbTms42uopmlgKTNEdFTU1kH/9nz+uGT/K/e3dzfu4vXD59kUWuODZdfyEeuvIj3vGUpuaz/ADOz0+Pgn+dKpeCnr/Tx4DOv89hL++gfLLK0rYXr33EhH7niIt65+nwyGc/vY2a1c/CfQwaGR/iHbfv5P8/t5fGt+xgYLrGso8B7Ll3KNZeczzWrz+eyZe2e6M3MpjWb4J/z7Jw2N635LBsuX86Gy5dzfLDI/9u6j0df3MdPtx/k77eUJzNdsjBPz+pyJ3Dx0oUsbS/Q2d7C0vYCbS1ZdwpmNisO/nmkrZDjhqtWcMNVK4gIdh48waZXD/HLnYfYtPMQj72075TvFHIZOtsLXLCowMolC1m5ZEHF6wJWLFlAIecLyGY2zkM955C+/kHeODJAX/8gB/uHOHi8/Hog2b77zZPsOXySYsUzAxIsX9RK99KFXHx+G91LF7J6aRvd5y9k+eJWlra1+C8GsxTwUE9KdbYX6GwvTFtmpBS8cXSA3YdOsPvNk7x26ASvHTrBbw8e5/GX99HXPzShfEsuw/LzWrlwUSsXLV7Ahee10tVeYGl7C53J69K2AksW5n3XkVlKOPhTJpsRKxYvYMXiBbyryv7+wSKvHTzBrjdPsPfwSfYeGWDPkQHeOHKSX756iH1HByb8xTBKgkWteRYvzLN4YQuLF5TfL1nYwpKFLZzf3kJnWwvnt5WvPSxta+G8BXnfnWQ2Dzn4m0x7Icfaixax9qJFVfeXSsHRgWH6+oc42D/IwePl1wP9Qxw5McSbJ4Y5fHKYN08M8Wrfcd48McSxgWLVY+UyGvvLoaujMOF1aVsLS9uTjqKtwPltLbTk/BeF2dng4LcJMhmVz+gXtnDZsvaavjM8UuLN40P09Q9x6Pj4tYe+/kH6+gc5cGyQvv4htr1xjL7+QYZHql9X6mjNcd6CPIta83S05liUvF+0IEdHIUdb8q89+Vf+nGVhS5bWfJaFLTkW5LO05jO+bmE2DQe/zVk+m2HZolaWLWqdsWxEcOTkMAePJ51E8lfFof4hDh4f4ujJYY4ODHP0ZJFdh05wbKDIkZPD9A9W/6uiGgkW5LNJJ5BlQUv5/djr6Pvk83jHMbHcwpYcC1oy5WNUbG/NZynk3LnYucvBb2eVNP4XxaVdtX+vVApODI9wfLBI/2Bx7PXk0AgnhkY4OTwy6X0xeS1xcrhc7uTwCIdPDLF3uFxuYPQ7wyPM9ua20c5luk5jbPvkMvksrRM6oQyFXLlMaz6TvGZpzWV8Qd3OCAe/nRMyGY0N8VxQ52NHBIPF0ljnMNqJnKzoIAaS96Pbxz4PjzAwqfM50D/IyaETDAyXOJF0QAPDp7cKWy4jCrkMhaQjKCR/bRRyGVpy5Q6jJZehJZuhkC+/Ttg+WjbZ3jL5fS5DYdLnyv2F7Phxsr5QnxoOfmt6ksbOspecod8olWKswxh7HSqNdTSjncvgcImB4ujnUnlbscRgsbxvsDi+bahY/v7hk0MMFUtj2ya8H6nfsp/ZjMhnlXQM5Q4on1WVDiNb7oim6GjGO6cM+ez4tvxoB5Zsz2c1tm3sc1K+8nM+605ptmYMfkl3Ax8G9kfE5VX2nwd8F+hOjvc3EfE/k307gWPACFCs9eECs7TJZDR2cfpsigiGRiZ2CsMjFZ3DyPj20Y5isDiSvI9TvjM8MvX3Rt8fOTmcvB+pWm6qi/tzMdop5bMVHUVu/HNlZ1HumMY7jdF9LZM6lrEOKemAJhx7mk4pl5n0PpchnymXyWY0L64N1fK/wnuArwHfmWL/54GXIuIjkrqAbZLujYjRJ4XeHxF9c6+qmc2WJAq57LyatqNUKndGQyMlhkc7g2KMdRLDk/eNBMMj451OseLz+HdHKI5EUr68bXikxGBynOHkOEPFEkeTjmn0GMMjk367WKr6LEu9tGQz5LIil1HSOZQ/t2QzdHYUuP/fvueM/faoGYM/Ip6UtHq6IkCHyt1YO3AIqP0WDDNrKpmMaM1k5/UiRBEx1uFU/qVTLE3cNtqZDJdGO5igWBrdP/6+WIqxDmhoJCiOjB9rrCMrBe2Fs/PfpB5/d34NeBjYA3QAfxARowOLATwqKYC/i4i7pjqIpI3ARoDu7u46VMvM7PRIoiVXPiNvm36WlHNSPe4V++fAFuAi4Crga5JGHwtdHxFXA9cDn5f0e1MdJCLuioieiOjp6prFfX5mZjYr9Qj+TwMPRtl24FXgbQARsSd53Q88BFxTh98zM7M5qEfwvwb8MwBJFwBvBXZIapPUkWxvAz4IvFCH3zMzszmo5XbO7wPvAzol7Qa+DOQBIuJO4C+AeyQ9Dwj4DxHRJ+ktwEPJrUs54HsR8cMz0gozM6tZLXf13DTD/j2Uz+Ynb98BXHn6VTMzszPBE4GYmTUZB7+ZWZNx8JuZNZl5udi6pAPAb0/z651AM04R4XY3F7e7udTS7osjoqaHoOZl8M+FpN5mnAzO7W4ubndzqXe7PdRjZtZkHPxmZk0mjcE/5URwKed2Nxe3u7nUtd2pG+M3M7PppfGM38zMppGa4Je0QdI2Sdsl3dbo+pxJku6WtF/SCxXbzpf0mKTfJK9navnYhpC0StITkrZKelHSzcn2tLe7VdIvJf0qafd/Tranut2jJGUlPSvpB8nnZmn3TknPS9oiqTfZVre2pyL4JWWBr1Oe938tcJOktY2t1Rl1D7Bh0rbbgMcjYg3wePI5TYrArRHxduDdlNd3WEv62z0IXBcRV1Je72KDpHeT/naPuhnYWvG5WdoN5WVrr6q4jbNubU9F8FOe5397ROxI1vq9D7ihwXU6YyLiScpLXFa6Afh28v7bwEfPaqXOsIjYGxHPJO+PUQ6DFaS/3RER/cnHfPIvSHm7ASStBP4F8M2Kzalv9zTq1va0BP8KYFfF593JtmZyQUTshXJIAssaXJ8zJlkDeh3wNE3Q7mS4YwuwH3gsIpqi3cDfAn8GlCq2NUO7YXzZ2s3JsrRQx7bXY83d+UBVtvl2pRSS1A48ANwSEUeT9R5SLSJGgKskLaa8xsXlja7TmSbpw8D+iNgs6X2Nrk8DrI+IPZKWAY9JermeB0/LGf9uYFXF55WUF39vJvskLQdIXvc3uD51JylPOfTvjY
gHk82pb/eoiDgM/APl6ztpb/d64F9K2kl56PY6Sd8l/e0Gply2tm5tT0vwbwLWSLpEUgtwI/Bwg+t0tj0M/HHy/o+Bv29gXepO5VP7bwFbI+KOil1pb3dXcqaPpAXA7wMvk/J2R8R/jIiVEbGa8v+ffxwRf0TK2w3lpWqnWLa2bm1PzQNckj5EeUwwC9wdEX/Z4CqdMZXLYQL7KC+H+b+B+4FuyusgfzwiJl8APmdJuhb4CfA842O+t1Me509zu6+gfCEvS/lE7f6I+C+SlpLidldKhnq+FBEfboZ2jy5bm3wcXbb2L+vZ9tQEv5mZ1SYtQz1mZlYjB7+ZWZNx8JuZNRkHv5lZk3Hwm5k1GQe/mVmTcfCbmTUZB7+ZWZP5/5t4FFRMmTlTAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "from matplotlib import pyplot as plt\n", "plt.plot(hist_loss)" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[]" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAX4AAAD8CAYAAABw1c+bAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAHJNJREFUeJzt3WuQXGd95/Hvr3tu0szIlqyRDLpYwhbYJrGMGcTFLrABUzLLRnjDBpGwkGJTKrMxBXupxbsvSGUpXrBbm0AFU1oX6wpbCfFSATvaRPhSToLBBCIJxtgyNh4LYY1kWx7rNjPSXLr7vy/69KhnpkdzZPWoZ07/PoWqz+V5up9HLn566jnnPEcRgZmZNY9coxtgZmYXl4PfzKzJOPjNzJqMg9/MrMk4+M3MmoyD38ysyTj4zcyajIPfzKzJOPjNzJpMS6MbUMvKlStjw4YNjW6GmdmisW/fvsGI6ElTdkEG/4YNG9i7d2+jm2FmtmhI+nXasp7qMTNrMg5+M7Mm4+A3M2syDn4zsyaTKvglbZX0rKR+SXfNUuZmSX2S9kv6/vnUNTOzi2fOu3ok5YG7gVuBAWCPpF0R8XRVmUuBrwNbI+IFSavS1jUzs4srzYh/C9AfEQciYhy4D9g2rczvAt+NiBcAIuLoedQ1M7OLKM19/GuAQ1X7A8Dbp5V5I9Aq6R+BbuCrEfF/UtYFQNIOYAfA+vXr07TdzDIqIigFFEtR/hPlz1IpiKrzQZD8j9JkmWQ74my5pGwk26XklbOV7VJM/d5CMSiUShRKQbEY5c/S1HJn60KpVClToliqLp/0J/ltquoGZ9tVSnaWtrdwx3uunPe/3zTBrxrHpr+otwV4K/A+YAnwT5J+nLJu+WDEPcA9AL29vX4RsC1KxVIwUSxNCZRy8CQBEVUBUBVakwGXhEYpgkKxsj81TCoBM5Gcnygm54tn61b/XiWYKuFZKFa2y8FWKgXF0tn2VgKuVIJipQ9JiE2GaYlpfTsb1IVSabIt1e2d3o7StD4ViqUp+81oVXf7ggn+AWBd1f5a4EiNMoMRMQKMSHoM2Jyyrtk5RQTjxRJnxoucHi9yZqI4uT06UWS8UGKsUGKscHZ7vFBivFiaHLlNbifhUh2sxYjJkV0xCblKOFVCaCKpP1EslbdLwUShxHgxGC8UmSiW21hcRIHVkhO5nMhL5HMiJyb3Jz+TY7mkjER5W2e3c7nyp4CWfI58TrS15FiSE635HDkp+S0mv6fyGy050ZIXLbkcLTmRz4vWXG7yXH7ytyGfy5FTeTSZy5V/j+R3JZL2Jm2vtCnpgyi3t1IWkr5V9UNisnxLLpe0q9y2fO7s35Gq+558dz4pW+lrPq/J9qhq+KsZbTn7nRf1v32KMnuATZI2AoeB7ZTn9Kv9DfA1SS1AG+XpnD8FnklR1zIiIhgrlBgeKzA8WmB4rMCp0QmGRwsMjRYYGp1geKy8fWq0fK5yfGi0XKdQNZKtHunGBeSpBK35HK050ZLPTQZKJfimfFaCKXc2oPI50dXekoRUjrZ8bjKs2lpytLeUP1vzoi2fpyVfFaRTwmdqYAFTQiufhGA5PHLkc+UAqoRKfloItSTBms+J1vzZ85WAzVWFSiXMpgeYNac5gz8iCpLuBB4C8sC9EbFf0h3J+Z0R8QtJDwI/B0rANyLiKYBadeepL1YnhWKJ46cnODYyzqsjYxwbGefYyDgnTk9w6swEJ89McGp0glNnCpw8M8HQ2MRk0E8U507o9pYc3R2tLOtoobujhe6OVi5f1kFXewttLZVgzk2GWT4nOlrzLGnNs7Qtz5K28vaStjwdrXk6WvKTAdzemqO9JU9rvhyKlWA0s7MUFzKUmie9vb3hRdrqKyJ4ZXiMw8fPMHD8DC+dHOXY6XGODY9z7PQ4x0fOfp44MzHrCHtpW55LlrSyrKOVZUtauGRJK13t5fDu6mhJtlvobGuhKwn2ZR2tdE+ea6Wtxc8NmtWbpH0R0Zum7IJcndPOz3ihxODwGEeHxnj51ChHT41Obr94cpTDx89w+MQZxgqlKfVa82L50jZWdLaxfGkb11y+jOWdrazobGdlV/n4is42LutsZ0VnG5cubaU179A2W+wc/IvA0aFR9h08Tt+hE7x8apRjpyc4cfrs9MvwWGFGnZygp7udy5d1cM3rlvH+a1ezdvkS1ly6hLXLl/K6Szvobm/xPK9ZE3LwLzARwfOvDLPn4HH2HDzGvl8f59evngagrSXH5cs6WL60lRWdbVzZ08WlS1tZvrSNlV3trF7WzuplHaxa1s5lne2e2zazmhz8C8RLJ0d5oO8w3/3pAL98eRiAyzrb6N2wnI+//Qp6Nyznza+/xPPjZnbBHPwNdHq8wEP7X+K7Pz3MD/sHiYC3XrGcL257Mzdt6mHDZUs9FWNmdefgv8gmiiV+2D/Irr4jPLz/JUbGi6xdvoTPvHcTt79lDRtXdja6iWaWcQ7+i6BUCvYcPMauJ46w+8kXOX56gmUdLXzoutfzr25Yw9s2rCDn+Xgzu0gc/PPs4f0v8Ue79vPiyVE6WnO8/5rV/Nbm1/OeN/XQ3pJvdPPMrAk5+OfZVx99jtZ8jq9uv573X7Oaznb/lZtZY/kWkXk0OlHkmZeG+JebX8e269c49M1sQXDwz6OnDp+kWAquX7e80U0xM5vk4J9HfYdOALB53SUNbomZ2VkO/nn0s0MnWHPpElZ1dzS6KWZmkxz886jvhRNcv+7SRjfDzGwKB/88eWVojMMnzjj4zWzBcfDPkyeS+f3r1zv4zWxhcfDPk75DJ8jnxG+83hd2zWxhcfDPk75DJ3jT6m6WtPnpXDNbWBz886BUCp44dMLTPGa2IDn458GBwRGGxgq+sGtmC1Kq4Je0VdKzkvol3VXj/M2STkrqS/58oercQUlPJseb4g3qlQe33uLgN7MFaM7FYyTlgbuBW4EBYI+kXRHx9LSiP4iID83yNbdExOCFNXXx6Dt0nK72Ft7Q09XoppiZzZBmxL8F6I+IAxExDtwHbJvfZi1ufYdOcN3aS/zOWzNbkNIE/xrgUNX+QHJsundKekLS9yS9uep4AA9L2idpxwW0dVEYnSjyzItDnt83swUrzTrBtYatMW3/p8AVETEs6YPAA8Cm5NyNEXFE0irgEU
nPRMRjM36k/I/CDoD169en7sBCs//ISQqlcPCb2YKVZsQ/AKyr2l8LHKkuEBGnImI42d4NtEpamewfST6PAvdTnjqaISLuiYjeiOjt6ek5744sFD97IXli18FvZgtUmuDfA2yStFFSG7Ad2FVdQNLlkpRsb0m+91VJnZK6k+OdwAeAp+rZgYWm79AJXn9JB6uWeUVOM1uY5pzqiYiCpDuBh4A8cG9E7Jd0R3J+J/AR4NOSCsAZYHtEhKTVwP3JvwktwLci4sF56suC0OcHt8xsgUv1LsBk+mb3tGM7q7a/BnytRr0DwOYLbOOiMTg8xsDxM3zinVc0uilmZrPyk7t1NLkip1+1aGYLmIO/jiZX5FyzrNFNMTOblYO/jvoOneCNq7tZ2pZqBs3MrCEc/HVSKkX5wq5v4zSzBc7BXycHBkcYGi14YTYzW/Ac/HXiVy2a2WLh4K+TvkMn6GzLc6VX5DSzBc7BXyflFTkv9YqcZrbgOfjr4NToBE+/eIq3XuH7981s4XPw18FPDhyjWApuvGplo5tiZjYnB38dPN4/SEdrjhuu8IVdM1v4HPx18Hj/IG/bsIL2lnyjm2JmNicH/wV6+dQozx0d5iZP85jZIuHgv0A/er78DnnP75vZYuHgv0A/fO5Vli9t5drXeWE2M1scHPwXICJ4vH+Qd125kpzv3zezRcLBfwEODI7w0qlR3nXVZY1uiplZag7+C/B4f3l+3xd2zWwxcfBfgMf7B1m7fAnrVyxtdFPMzFJz8L9GxVLwo+df5cYrV5K8TN7MbFFw8L9GTx4+ydBogRs3eZrHzBaXVMEvaaukZyX1S7qrxvmbJZ2U1Jf8+ULauotVZX7/XVf6wq6ZLS5zvhxWUh64G7gVGAD2SNoVEU9PK/qDiPjQa6y76DzeP8jVl3ezsqu90U0xMzsvaUb8W4D+iDgQEePAfcC2lN9/IXUXrNGJInt/fdx385jZopQm+NcAh6r2B5Jj071T0hOSvifpzedZd1HZe/A444WSl2kws0VpzqkeoNYtKzFt/6fAFRExLOmDwAPAppR1yz8i7QB2AKxfvz5Fsxrnh/2DtOTElo0rGt0UM7PzlmbEPwCsq9pfCxypLhARpyJiONneDbRKWpmmbtV33BMRvRHR29PTcx5duPh+9PwgN6xfTmd7mn83zcwWljTBvwfYJGmjpDZgO7CruoCky5XczC5pS/K9r6apu9icOD3Ok4dPepkGM1u05hyyRkRB0p3AQ0AeuDci9ku6Izm/E/gI8GlJBeAMsD0iAqhZd576clH80/OvEuFlGsxs8Uo1V5FM3+yedmxn1fbXgK+lrbuYPf78IJ1teTav82sWzWxx8pO75+nx/ld5+xsuozXvvzozW5x8dTKF8UKJh/a/xF/8+Nf8anCET77zikY3yczsNXPwn8OhY6f5q39+gW/vPcTg8DjrVizh81uv5vfe4eA3s8XLwV/Dz144zp/9fT//8OxRBLz36tV8/B3refemHr9py8wWPQd/lV8NjvA/HnqG3U++xMquNj5zy1V8dMt61ly6pNFNMzOrGwc/MDg8xp89+hx/+ZMXaGvJ8dn3bWLHu9/gB7TMLJOaOtlGJ4p84wcH2Pn9A5yZKPLRt63jc+/bxKplHY1umpnZvGna4I8IPndfHw/uf4lbr13N57e+iatWdTe6WWZm865pg/+v9w3w4P6X+PzWq/n0zVc2ujlmZhdNUz6FdOjYaf74/z3N2zeuYMe739Do5piZXVRNF/zFUvAfvt2HgP/5O5vJ+/ZMM2syTTfV878ee549B4/zJ7+zmbXLlza6OWZmF11TjfifOnySP33kl3zwNy/n9rcs+heBmZm9Jk0T/KMTRf79/+1j+dI2vvTh3yR5fYCZWdNpmqme//7gszx3dJhvfmoLyzvbGt0cM7OGaYoR/w+fG+Tex3/FJ995Be9548J+raOZ2XxriuDf+f3nWbt8CXfddk2jm2Jm1nBNEfzHRsZ50+pulrTlG90UM7OGa4rgHxkv0NXRNJczzMzOqSmCf3i04JU2zcwSzRH8YwW6HfxmZkDK4Je0VdKzkvol3XWOcm+TVJT0kapjByU9KalP0t56NPp8TBRLjBVKHvGbmSXmTENJeeBu4FZgANgjaVdEPF2j3JeBh2p8zS0RMViH9p63kbECAF0OfjMzIN2IfwvQHxEHImIcuA/YVqPcZ4DvAEfr2L4LNjTq4Dczq5Ym+NcAh6r2B5JjkyStAW4HdtaoH8DDkvZJ2vFaG/pajYwnwe+7eszMgHRLNtRa1Cam7X8F+HxEFGusgXNjRByRtAp4RNIzEfHYjB8p/6OwA2D9+vUpmpXOcDLi9xy/mVlZmhH/ALCuan8tcGRamV7gPkkHgY8AX5f0YYCIOJJ8HgXupzx1NENE3BMRvRHR29NTv2UVhj3Hb2Y2RZrg3wNskrRRUhuwHdhVXSAiNkbEhojYAPw18O8i4gFJnZK6ASR1Ah8AnqprD+ZQCf5uT/WYmQEppnoioiDpTsp36+SBeyNiv6Q7kvO15vUrVgP3J9M/LcC3IuLBC292ep7qMTObKlUaRsRuYPe0YzUDPyJ+v2r7ALD5Atp3wTzVY2Y2Veaf3K0Ef6cXaDMzA5og+EfGCixpzdOSz3xXzcxSyXwaDo95gTYzs2pNEPxF39FjZlYl+8E/OkFnu+f3zcwqMh/8I2NF39FjZlYl88E/NFZw8JuZVcl88I84+M3Mpsh88PuuHjOzqZoi+L0ks5nZWZkO/vFCifFCia42B7+ZWUWmg3/ytYse8ZuZTcp08HuBNjOzmTId/H7frpnZTJkOfr9v18xspkwHv1/CYmY2U7aDv/LaRQe/mdmkpgh+j/jNzM7KdPD7dk4zs5kyHfyVu3o6/QCXmdmkTAf/yFiBpW158jk1uilmZgtGquCXtFXSs5L6Jd11jnJvk1SU9JHzrTsfvECbmdlMcwa/pDxwN3AbcC3wMUnXzlLuy8BD51t3vgyPFXxHj5nZNGlG/FuA/og4EBHjwH3AthrlPgN8Bzj6GurOC4/4zcxmShP8a4BDVfsDybFJktYAtwM7z7du1XfskLRX0t5XXnklRbPm5pewmJnNlCb4a10ZjWn7XwE+HxHF11C3fDDinojojYjenp6eFM2a29CoR/xmZtOlScUBYF3V/lrgyLQyvcB9kgBWAh+UVEhZd96MjBfo9j38ZmZTpEnFPcAmSRuBw8B24HerC0TExsq2pD8H/jYiHpDUMlfd+TQ86qkeM7Pp5kzFiChIupPy3Tp54N6I2C/pjuT89Hn9OevWp+lz88VdM7OZUqViROwGdk87VjPwI+L356p7MYwVikwUw1M9ZmbTZPbJ3cklmdvyDW6JmdnCktngHxkr32DU1dHa4JaYmS0smQ3+obEJALraPeI3M6uW2eCfHPG3e8RvZlYts8E/nIz4Oz3iNzObIsPBXx7x+64eM7Opshv8ftG6mVlNmQ3+ydcuOvjNzKbIbPAPjfm1i2ZmtWQ2+EfGCnS25cn5tYtmZlNkNviHvSSzmVlN2Q3+8QJdvqPHzGyG7
Aa/l2Q2M6spu8Hv1y6amdWU2eD3+3bNzGrLbPAPearHzKymzAb/iC/umpnVlMngjwjfzmlmNotMBv9YoUShFJ7qMTOrIZPBP+x1eszMZpXJ4PcCbWZms0sV/JK2SnpWUr+ku2qc3ybp55L6JO2VdFPVuYOSnqycq2fjZzPkJZnNzGY1ZzJKygN3A7cCA8AeSbsi4umqYo8CuyIiJF0HfBu4uur8LRExWMd2n1NlxO+XsJiZzZRmxL8F6I+IAxExDtwHbKsuEBHDERHJbicQNFBljt8jfjOzmdIE/xrgUNX+QHJsCkm3S3oG+DvgU1WnAnhY0j5JO2b7EUk7kmmiva+88kq61s/CF3fNzGaXJvhrLWg/Y0QfEfdHxNXAh4EvVp26MSJuAG4D/lDSu2v9SETcExG9EdHb09OTolmzc/Cbmc0uTfAPAOuq9tcCR2YrHBGPAVdKWpnsH0k+jwL3U546mleTd/V4jt/MbIY0wb8H2CRpo6Q2YDuwq7qApKskKdm+AWgDXpXUKak7Od4JfAB4qp4dqKXyovWlrfn5/ikzs0VnziFxRBQk3Qk8BOSBeyNiv6Q7kvM7gd8GPiFpAjgDfDS5w2c1cH/yb0IL8K2IeHCe+jJpKFmZ069dNDObKdVcSETsBnZPO7azavvLwJdr1DsAbL7ANp43L8lsZja7TD65OzxWoLPd0zxmZrVkNPiLdHW0NroZZmYLUjaDf3SCLo/4zcxqymTwj4wVPcdvZjaLTAZ/eY7fwW9mVktmg7/bwW9mVlPmgj8iPOI3MzuHzAX/WKFEsRRersHMbBaZC/7KS1h8cdfMrLbMBb9fu2hmdm6ZC36/hMXM7NwyG/y+q8fMrLbsBb9ftG5mdk7ZC36/hMXM7JwyG/ye6jEzqy2zwe+pHjOz2jIX/CNjBSRY2ubVOc3Maslc8A+NFuhqayF53aOZmU2TueAfGSv4wq6Z2TlkLvi9QJuZ2bmlCn5JWyU9K6lf0l01zm+T9HNJfZL2Sropbd16G/aL1s3MzmnO4JeUB+4GbgOuBT4m6dppxR4FNkfE9cCngG+cR926cvCbmZ1bmhH/FqA/Ig5ExDhwH7CtukBEDEdEJLudQKStW28jDn4zs3NKE/xrgENV+wPJsSkk3S7pGeDvKI/6U9etp+FRz/GbmZ1LmuCvdV9kzDgQcX9EXA18GPji+dQFkLQjuT6w95VXXknRrNqGxwp0+64eM7NZpQn+AWBd1f5a4MhshSPiMeBKSSvPp25E3BMRvRHR29PTk6JZNb8juavHD2+Zmc0mTfDvATZJ2iipDdgO7KouIOkqJU9MSboBaANeTVO3nkYnSpQCutpb5+snzMwWvTnnRCKiIOlO4CEgD9wbEfsl3ZGc3wn8NvAJSRPAGeCjycXemnXnqS8MjU0A0OURv5nZrFJNhkfEbmD3tGM7q7a/DHw5bd35UlmL30/umpnNLlNP7o6MFQFP9ZiZnUumgr8y1eOLu2Zms8tU8FdG/N0e8ZuZzSpTwT/sEb+Z2ZwyFvzJHL8v7pqZzSpbwV+5q8dLNpiZzSpTwT8yViAnWNLqqR4zs9lkKvgrL2HxaxfNzGaXueDv9jSPmdk5ZSv4vSSzmdmcMhX8I+N+0bqZ2VwyFfxDo377lpnZXDIV/H7topnZ3DIV/JW7eszMbHbZCn5P9ZiZzSlTwf++a1Zx3dpLGt0MM7MFLVPD469sf0ujm2BmtuBlasRvZmZzc/CbmTUZB7+ZWZNx8JuZNZlUwS9pq6RnJfVLuqvG+d+T9PPkz48kba46d1DSk5L6JO2tZ+PNzOz8zXlXj6Q8cDdwKzAA7JG0KyKerir2K+A9EXFc0m3APcDbq87fEhGDdWy3mZm9RmlG/FuA/og4EBHjwH3AtuoCEfGjiDie7P4YWFvfZpqZWb2kCf41wKGq/YHk2Gz+LfC9qv0AHpa0T9KO82+imZnVU5oHuGq9zipqFpRuoRz8N1UdvjEijkhaBTwi6ZmIeKxG3R1A5R+GYUnPpmhbLSuBZpxWcr+bi/vdXNL0+4q0X5Ym+AeAdVX7a4Ej0wtJug74BnBbRLxaOR4RR5LPo5Lupzx1NCP4I+IeytcGLoikvRHRe6Hfs9i4383F/W4u9e53mqmePcAmSRsltQHbgV3TGrUe+C7wbyLil1XHOyV1V7aBDwBP1avxZmZ2/uYc8UdEQdKdwENAHrg3IvZLuiM5vxP4AnAZ8PXkReeF5F+n1cD9ybEW4FsR8eC89MTMzFJJtUhbROwGdk87trNq+w+AP6hR7wCwefrxeXbB00WLlPvdXNzv5lLXfiui5nVaMzPLKC/ZYGbWZDIT/HMtK5Elku6VdFTSU1XHVkh6RNJzyefyRrax3iStk/QPkn4hab+kzybHs97vDkn/LOmJpN9/nBzPdL8rJOUl/UzS3yb7zdLvGUvd1LPvmQj+qmUlbgOuBT4m6drGtmpe/Tmwddqxu4BHI2IT8GiynyUF4D9GxDXAO4A/TP4bZ73fY8B7I2IzcD2wVdI7yH6/Kz4L/KJqv1n6DeWlbq6vuo2zbn3PRPCTYlmJLEkegDs27fA24JvJ9jeBD1/URs2ziHgxIn6abA9RDoM1ZL/fERHDyW5r8ifIeL8BJK0F/gXl54MqMt/vc6hb37MS/Oe7rEQWrY6IF6EcksCqBrdn3kjaALwF+AlN0O9kuqMPOAo8EhFN0W/gK8B/BkpVx5qh31B7qZu69T0r79xNvayELW6SuoDvAJ+LiFPJMyKZFhFF4HpJl1J+LuY3Gt2m+SbpQ8DRiNgn6eZGt6cBZix1U88vz8qIP9WyEhn3sqTXASSfRxvcnrqT1Eo59P8yIr6bHM58vysi4gTwj5Sv72S93zcCvyXpIOWp2/dK+guy329g6lI3QGWpm7r1PSvBP+eyEk1gF/DJZPuTwN80sC11p/LQ/n8Dv4iIP6k6lfV+9yQjfSQtAd4PPEPG+x0R/yUi1kbEBsr/f/77iPg4Ge83nHOpm7r1PTMPcEn6IOU5wcqyEl9qcJPmjaS/Am6mvGLfy8AfAQ8A3wbWAy8A/zoipl8AXrQk3QT8AHiSs3O+/5XyPH+W+30d5Qt5ecoDtW9HxH+TdBkZ7ne1ZKrnP0XEh5qh35LeQHmUD2eXuvlSPfuemeA3M7N0sjLVY2ZmKTn4zcyajIPfzKzJOPjNzJqMg9/MrMk4+M3MmoyD38ysyTj4zcyazP8H9kCj1ioOgVUAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "plt.plot(hist_acc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Optional Task" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we've got a more powerful network architecture, let's try to do better on CIFAR. Take your new and improved LeNet model and point it at the CIFAR dataset." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.3" }, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }