The standard way to avoid overfitting is called L2 regularization. It consists of appropriately modifying your cost function, from:

$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \left( y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \right) \tag{1}$$

to:

$$J_{regularized} = \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \left( y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \right)}_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}}_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
Exercise: Implement compute_cost_with_regularization() which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$, use:
np.sum(np.square(Wl))
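For reference, here is a minimal sketch of what such a function could look like. This is a sketch, not the graded solution: it assumes compute_cost() returns the cross-entropy cost of formula (1), and that parameters holds the three weight matrices of this notebook's network.

```python
# Sketch only: assumes compute_cost() implements formula (1) and that
# parameters contains "W1", "W2", "W3" (the three-layer network used here).
def compute_cost_with_regularization(A3, Y, parameters, lambd):
    m = Y.shape[1]
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    W3 = parameters["W3"]

    cross_entropy_cost = compute_cost(A3, Y)  # cross-entropy part of the cost

    # L2 part of the cost: (lambda / (2m)) * sum over all layers of sum(W^2)
    L2_regularization_cost = (lambd / (2 * m)) * (
        np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))

    return cross_entropy_cost + L2_regularization_cost
```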
```python
def model(X, Y, learning_rate=0.3, num_iterations=30000, print_cost=True, lambd=0, keep_prob=1):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
    learning_rate -- learning rate of the optimization
    num_iterations -- number of iterations of the optimization loop
    print_cost -- If True, print the cost every 10000 iterations
    lambd -- regularization hyperparameter, scalar
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    parameters -- parameters learned by the model. They can then be used to predict.
    """
    grads = {}
    costs = []                            # to keep track of the cost
    m = X.shape[1]                        # number of examples
    layers_dims = [X.shape[0], 20, 3, 1]

    # Initialize parameters dictionary.
    parameters = initialize_parameters(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        if keep_prob == 1:
            a3, cache = forward_propagation(X, parameters)
        elif keep_prob < 1:
            a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)

        # Cost function
        if lambd == 0:
            cost = compute_cost(a3, Y)
        else:
            cost = compute_cost_with_regularization(a3, Y, parameters, lambd)

        # Backward propagation.
        assert (lambd == 0 or keep_prob == 1)  # it is possible to use both L2 regularization and dropout,
                                               # but this assignment will only explore one at a time
        if lambd == 0 and keep_prob == 1:
            grads = backward_propagation(X, Y, cache)
        elif lambd != 0:
            grads = backward_propagation_with_regularization(X, Y, cache, lambd)
        elif keep_prob < 1:
            grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)

        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print the loss every 10000 iterations
        if print_cost and i % 10000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
        if print_cost and i % 1000 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (x1,000)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters
```
Observations:
The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
What is L2-regularization actually doing?
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function, you drive all the weights to smaller values. Having large weights becomes too costly! This leads to a smoother model in which the output changes more slowly as the input changes.
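As a tiny illustration of "too costly" (hypothetical numbers: $\lambda = 0.7$, $m = 100$, and two made-up weight matrices), the L2 term grows with the square of the weights:

```python
import numpy as np

lambd, m = 0.7, 100                      # hypothetical hyperparameter and batch size
l2_cost = lambda W: (lambd / (2 * m)) * np.sum(np.square(W))

W_small = np.full((3, 3), 0.1)           # made-up "small weights" matrix
W_large = np.full((3, 3), 2.0)           # made-up "large weights" matrix
print(l2_cost(W_small))                  # ~0.000315
print(l2_cost(W_large))                  # ~0.126, i.e. 400x more expensive
```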
What you should remember -- the implications of L2-regularization on:
The cost computation:
A regularization term is added to the cost
The backpropagation function:
There are extra terms in the gradients with respect to weight matrices
Weights end up smaller ("weight decay"):
Weights are pushed to smaller values.
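To see where the name "weight decay" comes from, substitute the regularized gradient $dW^{[l]} = dW_{ce}^{[l]} + \frac{\lambda}{m} W^{[l]}$ into the gradient descent update, where $dW_{ce}^{[l]}$ denotes the gradient of the cross-entropy part alone and $\alpha$ is the learning rate:

$$W^{[l]} := W^{[l]} - \alpha \left( dW_{ce}^{[l]} + \frac{\lambda}{m} W^{[l]} \right) = \left(1 - \frac{\alpha\lambda}{m}\right) W^{[l]} - \alpha \, dW_{ce}^{[l]}$$

Since $1 - \frac{\alpha\lambda}{m} < 1$, every update multiplicatively shrinks, i.e. "decays", the weights before applying the usual gradient step.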
L2 Regularization Backpropagation

Because you changed the cost, you have to change backward propagation as well: the changes only concern dW1, dW2 and dW3, to which you add the gradient of the regularization term, $\frac{\lambda}{m} W^{[l]}$.

```python
# GRADED FUNCTION: backward_propagation_with_regularization

def backward_propagation_with_regularization(X, Y, cache, lambd):
    """
    Implements the backward propagation of our baseline model to which we added an L2 regularization.

    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation()
    lambd -- regularization hyperparameter, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    ### START CODE HERE ### (approx. 1 line)
    dW3 = 1. / m * np.dot(dZ3, A2.T) + (lambd / m) * W3
    ### END CODE HERE ###
    db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    ### START CODE HERE ### (approx. 1 line)
    dW2 = 1. / m * np.dot(dZ2, A1.T) + (lambd / m) * W2
    ### END CODE HERE ###
    db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    ### START CODE HERE ### (approx. 1 line)
    dW1 = 1. / m * np.dot(dZ1, X.T) + (lambd / m) * W1
    ### END CODE HERE ###
    db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
```
3 - Dropout
Finally, dropout is a widely used regularization technique that is specific to deep learning. It randomly shuts down some neurons in each iteration. Figures 2 and 3 below illustrate what this means.
Figure 2: Drop-out on the second hidden layer.
At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$, or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in either the forward or the backward propagation of that iteration.
Figure 3: Drop-out on the first and third hidden layers.
$1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons.
When you shut some neurons down, you actually modify your model. The idea behind dropout is that at each iteration you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of any one specific other neuron, because that other neuron might be shut down at any time.
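To see those "on average" percentages emerge, here is a small simulation; the keep probabilities 0.6 and 0.8 are assumptions chosen to match the 40% and 20% shut-down rates of Figure 3:

```python
import numpy as np

np.random.seed(1)
for layer, keep_prob in [("1st", 0.6), ("3rd", 0.8)]:
    D = np.random.rand(1000, 1000) < keep_prob   # dropout mask: True = kept
    shut_down = 100 * (1 - D.mean())             # percentage of zeroed neurons
    print("{} layer: ~{:.1f}% shut down".format(layer, shut_down))
# prints roughly 40% for keep_prob = 0.6 and 20% for keep_prob = 0.8
```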
3.1 - Forward propagation with dropout
Exercise: Implement the forward propagation with dropout. You are using a 3-layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or the output layer.
Instructions: You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using np.random.rand() to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}]$ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 1 with probability (keep_prob), and 0 otherwise.
Hint: Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1 and about 20% are 0. This Python statement:

```python
X = (X < keep_prob).astype(int)
```
is conceptually the same as this if-else statement (for the simple case of a one-dimensional array):

```python
for i, v in enumerate(x):
    if v < keep_prob:
        x[i] = 1
    else:   # v >= keep_prob
        x[i] = 0
```
Note that X = (X < keep_prob).astype(int) works with multi-dimensional arrays, and the resulting output preserves the dimensions of the input array.
Also note that without using .astype(int), the result is an array of booleans True and False, which Python automatically converts to 1 and 0 if we multiply it with numbers. (However, it's better practice to convert data into the data type that we intend, so try using .astype(int).)
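For example (illustrative values; the seed is arbitrary):

```python
import numpy as np

np.random.seed(0)
keep_prob = 0.8
X = np.random.rand(2, 5)                  # uniform random numbers in [0, 1)
mask = (X < keep_prob).astype(int)        # 1 with probability keep_prob, 0 otherwise
print(mask)                               # [[1 1 1 1 1]
                                          #  [1 1 0 0 1]] -- about 80% ones
print(mask.shape)                         # (2, 5): input dimensions are preserved
```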
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons.) You can think of $D^{[1]}$ as a mask: when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by keep_prob. By doing this you are ensuring that the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
```python
# GRADED FUNCTION: forward_propagation_with_dropout

def forward_propagation_with_dropout(X, parameters, keep_prob=0.5):
    """
    Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (20, 2)
                    b1 -- bias vector of shape (20, 1)
                    W2 -- weight matrix of shape (3, 20)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    A3 -- last activation value, output of the forward propagation, of shape (1,1)
    cache -- tuple, information stored for computing the backward propagation
    """
    np.random.seed(1)

    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    ### START CODE HERE ### (approx. 4 lines)   # Steps 1-4 below correspond to the Steps 1-4 described above.
    D1 = np.random.rand(W1.shape[0], X.shape[1])   # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = D1 < keep_prob                            # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
    A1 = np.multiply(A1, D1)                       # Step 3: shut down some neurons of A1
    A1 = A1 / keep_prob                            # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    ### START CODE HERE ### (approx. 4 lines)
    D2 = np.random.rand(W2.shape[0], A1.shape[1])  # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = D2 < keep_prob                            # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
    A2 = np.multiply(A2, D2)                       # Step 3: shut down some neurons of A2
    A2 = A2 / keep_prob                            # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)

    return A3, cache
```
3.2 - Backward propagation with dropout

```python
# GRADED FUNCTION: backward_propagation_with_dropout

def backward_propagation_with_dropout(X, Y, cache, keep_prob):
    """
    Implements the backward propagation of our baseline model to which we added dropout.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation_with_dropout()
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1. / m * np.dot(dZ3, A2.T)
    db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)
    dA2 = np.dot(W3.T, dZ3)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA2 = dA2 * D2            # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
    dA2 = dA2 / keep_prob     # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1. / m * np.dot(dZ2, A1.T)
    db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)
    dA1 = np.dot(W2.T, dZ2)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA1 = dA1 * D1            # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
    dA1 = dA1 / keep_prob     # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1. / m * np.dot(dZ1, X.T)
    db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients
```
Note:
A common mistake when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
Deep learning frameworks like TensorFlow, PaddlePaddle, Keras or Caffe come with a dropout layer implementation. Don't stress: you will soon learn some of these frameworks.
What you should remember about dropout:
Dropout is a regularization technique.
You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
Apply dropout both during forward and backward propagation.
During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even for values of keep_prob other than 0.5, as in the sketch below.
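A quick numeric sanity check of that claim (a sketch with made-up activations; any keep_prob behaves the same way):

```python
import numpy as np

np.random.seed(2)
a = np.random.rand(50, 10000)             # made-up layer activations
for keep_prob in [0.5, 0.8, 0.9]:
    D = np.random.rand(*a.shape) < keep_prob
    a_drop = (a * D) / keep_prob          # inverted dropout: mask, then rescale
    print(keep_prob, round(a.mean(), 3), round(a_drop.mean(), 3))
# for every keep_prob, the mean of a_drop stays close to the mean of a
```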
What we want you to remember from this notebook:
Regularization will help you reduce overfitting.
Regularization will drive your weights to lower values.
L2 regularization and dropout are two very effective regularization techniques.