# Part 1: Python Basics with Numpy (optional assignment)

## 1 - Building basic functions with numpy

Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.

### 1.1 - sigmoid function, np.exp()

Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.

Reminder:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning. To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().

```python
# GRADED FUNCTION: basic_sigmoid

import math

def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1.0 / (1 + 1 / math.exp(x))
    ### END CODE HERE ###

    return s
```
```python
basic_sigmoid(3)
```

```
0.9525741268224334
```
Actually, we rarely use the “math” library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.

```python
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
```
```
---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-3-2e11097d6860> in <module>()
      1 ### One reason why we use "numpy" instead of "math" in Deep Learning ###
      2 x = [1, 2, 3]
----> 3 basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.

<ipython-input-1-65a96864f65f> in basic_sigmoid(x)
     15
     16     ### START CODE HERE ### (≈ 1 line of code)
---> 17     s = 1.0 / (1 + 1 / math.exp(x))
     18     ### END CODE HERE ###
     19

TypeError: a float is required
```
In fact, if $x = (x_1, x_2, ..., x_n)$ is a row vector, then np.exp(x) will apply the exponential function to every element of x. The output will thus be $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$.

```python
import numpy as np

# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
```

```
[  2.71828183   7.3890561   20.08553692]
```
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.

```python
# example of vector operation
x = np.array([1, 2, 3])
print(x + 3)
```

```
[4 5 6]
```
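The reciprocal case works the same way; as a quick sketch (reusing the `x` above, and noting that true division returns floats):

```python
print(1 / x)  # element-wise reciprocal: [1.         0.5        0.33333333]
```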




Exercise: Implement the sigmoid function using numpy.

Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices…) are called numpy arrays. You don’t need to know more for now.

For $x \in \mathbb{R}^n$:

$$sigmoid(x) = sigmoid\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ \vdots \\ \frac{1}{1+e^{-x_n}} \end{pmatrix} \tag{1}$$

```python
# GRADED FUNCTION: sigmoid

import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()

def sigmoid(x):
    """
    Compute the sigmoid of x.

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1.0 / (1 + np.exp(-x))
    ### END CODE HERE ###

    return s
```
```python
x = np.array([1, 2, 3])
sigmoid(x)
```

```
array([ 0.73105858,  0.88079708,  0.95257413])
```

### 1.2 - Sigmoid gradient

Exercise: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x. The formula is:

$$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x)\,(1 - \sigma(x)) \tag{2}$$

You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1 - s)$

```python
# GRADED FUNCTION: sigmoid_derivative

def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid
    function with respect to its input x. You can store the output of the
    sigmoid function into variables and then use it to calculate the gradient.

    Arguments:
    x -- A scalar or numpy array

    Return:
    ds -- Your computed gradient.
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    s = sigmoid(x)
    ds = s * (1 - s)
    ### END CODE HERE ###

    return ds
```
```python
x = np.array([1, 2, 3])
print("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```

```
sigmoid_derivative(x) = [ 0.19661193  0.10499359  0.04517666]
```

### 1.3 - Reshaping arrays

Two common numpy functions used in deep learning are np.shape and np.reshape()
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(…) is used to reshape X into some other dimension.

For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length * height * 3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b, c) you would do:

```python
v = v.reshape((v.shape[0] * v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape, etc.
```python
# GRADED FUNCTION: image2vector

def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))
    ### END CODE HERE ###

    return v
```

```python
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y, 3) where 3 represents the RGB values
image = np.array([[[ 0.67826139,  0.29380381],
                   [ 0.90714982,  0.52835647],
                   [ 0.4215251 ,  0.45017551]],

                  [[ 0.92814219,  0.96677647],
                   [ 0.85304703,  0.52351845],
                   [ 0.19981397,  0.27417313]],

                  [[ 0.60659855,  0.00533165],
                   [ 0.10820313,  0.49978937],
                   [ 0.34144279,  0.94630077]]])

print("image2vector(image) = " + str(image2vector(image)))
```

```
image2vector(image) = [[ 0.67826139]
 [ 0.29380381]
 [ 0.90714982]
 [ 0.52835647]
 [ 0.4215251 ]
 [ 0.45017551]
 [ 0.92814219]
 [ 0.96677647]
 [ 0.85304703]
 [ 0.52351845]
 [ 0.19981397]
 [ 0.27417313]
 [ 0.60659855]
 [ 0.00533165]
 [ 0.10820313]
 [ 0.49978937]
 [ 0.34144279]
 [ 0.94630077]]
```


### 1.4 - Normalizing rows

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $\frac{x}{\|x\|}$ (dividing each row vector of x by its norm).

For example, if

$$x = \begin{bmatrix} 0 & 3 & 4 \\ 2 & 6 & 4 \end{bmatrix} \tag{3}$$

then

$$\|x\| = np.linalg.norm(x, axis=1, keepdims=True) = \begin{bmatrix} 5 \\ \sqrt{56} \end{bmatrix} \tag{4}$$

and

$$x\_normalized = \frac{x}{\|x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \\ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \end{bmatrix} \tag{5}$$

Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.

Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).

```python
# GRADED FUNCTION: normalizeRows

def normalizeRows(x):
    """
    Implement a function that normalizes each row of the matrix x (to have unit length).

    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix. You are allowed to modify x.
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)  # the norm of each row, as a column vector

    # Divide x by its norm.
    x = x / x_norm  # numpy broadcasting divides the matrix by the column vector
    ### END CODE HERE ###

    return x
```

```python
x = np.array([
    [0, 3, 4],
    [1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```

```
normalizeRows(x) = [[ 0.          0.6         0.8       ]
 [ 0.13736056  0.82416338  0.54944226]]
```


Note
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You’ll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we’ll talk about it now!

### 1.5 - Broadcasting and the softmax function

A very important concept to understand in numpy is “broadcasting”. It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
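As a minimal extra illustration (not part of the graded exercise), here is broadcasting dividing a (2, 3) matrix by a (2, 1) column vector:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # shape (2, 3)
c = np.array([[10.0],
              [100.0]])          # shape (2, 1)

# the (2, 1) column is broadcast across the 3 columns of A
print(A / c)
# [[ 0.1   0.2   0.3 ]
#  [ 0.04  0.05  0.06]]
```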

Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.

Instructions:

- for $x \in \mathbb{R}^{1 \times n}$:

$$softmax(x) = softmax([x_1 \quad x_2 \quad \cdots \quad x_n]) = \left[ \frac{e^{x_1}}{\sum_j e^{x_j}} \quad \frac{e^{x_2}}{\sum_j e^{x_j}} \quad \cdots \quad \frac{e^{x_n}}{\sum_j e^{x_j}} \right]$$

- for a matrix $x \in \mathbb{R}^{m \times n}$, $x_{ij}$ maps to the element in the $i$th row and $j$th column of x, thus we have:

$$softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_j e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_j e^{x_{1j}}} & \cdots & \frac{e^{x_{1n}}}{\sum_j e^{x_{1j}}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{e^{x_{m1}}}{\sum_j e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_j e^{x_{mj}}} & \cdots & \frac{e^{x_{mn}}}{\sum_j e^{x_{mj}}} \end{bmatrix} = \begin{pmatrix} softmax(\text{first row of } x) \\ softmax(\text{second row of } x) \\ \vdots \\ softmax(\text{last row of } x) \end{pmatrix}$$
```python
# GRADED FUNCTION: softmax

def softmax(x):
    """
    Calculates the softmax for each row of the input x.
    Your code should work for a row vector and also for matrices of shape (n, m).

    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    s -- A numpy matrix equal to the softmax of x, of shape (n, m)
    """

    ### START CODE HERE ### (≈ 3 lines of code)
    # Apply exp() element-wise to x. Use np.exp(...).
    x_exp = np.exp(x) # (n, m)

    # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
    x_sum = np.sum(x_exp, axis=1, keepdims=True) # (n, 1)

    # Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
    s = x_exp / x_sum  # (n, m), thanks to broadcasting
    ### END CODE HERE ###

    return s
```

```python
x = np.array([
    [9, 2, 5, 0, 0],
    [7, 5, 0, 0, 0]])
print("softmax(x) = " + str(softmax(x)))
```

```
softmax(x) = [[  9.80897665e-01   8.94462891e-04   1.79657674e-02   1.21052389e-04
    1.21052389e-04]
 [  8.78679856e-01   1.18916387e-01   8.01252314e-04   8.01252314e-04
    8.01252314e-04]]
```


Note
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.

What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you’ll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions

## 2) Vectorization

In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.

```python
import time

x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]

### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
    dot += x1[i] * x2[i]
toc = time.process_time()
print("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1), len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
    for j in range(len(x2)):
        outer[i, j] = x1[i] * x2[j]
toc = time.process_time()
print("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
    mul[i] = x1[i] * x2[i]
toc = time.process_time()
print("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3, len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
    for j in range(len(x1)):
        gdot[i] += W[i, j] * x1[j]
toc = time.process_time()
print("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```

```
dot = 278
 ----- Computation time = 0.2854390000002205ms
outer = [[ 81.  18.  18.  81.   0.  81.  18.  45.   0.   0.  81.  18.  45.   0.   0.]
 [ 18.   4.   4.  18.   0.  18.   4.  10.   0.   0.  18.   4.  10.   0.   0.]
 [ 45.  10.  10.  45.   0.  45.  10.  25.   0.   0.  45.  10.  25.   0.   0.]
 [  0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.]
 [  0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.]
 [ 63.  14.  14.  63.   0.  63.  14.  35.   0.   0.  63.  14.  35.   0.   0.]
 [ 45.  10.  10.  45.   0.  45.  10.  25.   0.   0.  45.  10.  25.   0.   0.]
 [  0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.]
 [  0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.]
 [  0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.]
 [ 81.  18.  18.  81.   0.  81.  18.  45.   0.   0.  81.  18.  45.   0.   0.]
 [ 18.   4.   4.  18.   0.  18.   4.  10.   0.   0.  18.   4.  10.   0.   0.]
 [ 45.  10.  10.  45.   0.  45.  10.  25.   0.   0.  45.  10.  25.   0.   0.]
 [  0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.]
 [  0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.   0.]]
 ----- Computation time = 0.340502999999881ms
elementwise multiplication = [ 81.   4.  10.   0.   0.  63.  10.   0.   0.   0.  81.   4.  25.   0.   0.]
 ----- Computation time = 0.21034700000011064ms
gdot = [ 31.19670632  24.24358575  24.08807423]
 ----- Computation time = 0.4973530000000892ms
```

```python
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]

### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1, x2)
toc = time.process_time()
print("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1, x2)
toc = time.process_time()
print("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1, x2)
toc = time.process_time()
print("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")

### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W, x1)
toc = time.process_time()
print("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```

```
dot = 278
 ----- Computation time = 0.17332000000003234ms
outer = [[81 18 18 81  0 81 18 45  0  0 81 18 45  0  0]
 [18  4  4 18  0 18  4 10  0  0 18  4 10  0  0]
 [45 10 10 45  0 45 10 25  0  0 45 10 25  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [63 14 14 63  0 63 14 35  0  0 63 14 35  0  0]
 [45 10 10 45  0 45 10 25  0  0 45 10 25  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [81 18 18 81  0 81 18 45  0  0 81 18 45  0  0]
 [18  4  4 18  0 18  4 10  0  0 18  4 10  0  0]
 [45 10 10 45  0 45 10 25  0  0 45 10 25  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]]
 ----- Computation time = 0.15053899999983855ms
elementwise multiplication = [81  4 10  0  0 63 10  0  0  0 81  4 25  0  0]
 ----- Computation time = 0.11027100000005063ms
gdot = [ 31.19670632  24.24358575  24.08807423]
 ----- Computation time = 0.22889199999998056ms
```


As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.

Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), both of which perform an element-wise multiplication.
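A small sketch of the difference, using made-up 2×2 arrays:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(np.dot(A, B))       # matrix product:       [[19 22] [43 50]]
print(np.multiply(A, B))  # element-wise product: [[ 5 12] [21 32]]
print(A * B)              # same as np.multiply:  [[ 5 12] [21 32]]
```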

### 2.1 Implement the L1 and L2 loss functions

Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.

Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($\hat{y}$) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:

$$L_1(\hat{y}, y) = \sum_{i=0}^{m} |y^{(i)} - \hat{y}^{(i)}| \tag{6}$$

```python
# GRADED FUNCTION: L1

def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """

    ### START CODE HERE ### (≈ 1 line of code)
    loss = np.sum(np.abs(y - yhat))
    ### END CODE HERE ###

    return loss
```

```python
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat, y)))
```

```
L1 = 1.1
```


Exercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x, x) $= \sum_{j=0}^{n} x_j^2$.

- L2 loss is defined as:

$$L_2(\hat{y}, y) = \sum_{i=0}^{m} (y^{(i)} - \hat{y}^{(i)})^2 \tag{7}$$
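For instance, the sum of squared differences can be written as a one-liner with np.dot (a sketch assuming yhat and y are 1-D numpy arrays, as in the test cell below):

```python
# the difference vector dotted with itself gives the sum of squared differences
loss = np.dot(y - yhat, y - yhat)
```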
```python
# GRADED FUNCTION: L2

def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L2 loss function defined above
    """

    ### START CODE HERE ### (≈ 1 line of code)
    loss = np.sum(np.power((y - yhat), 2))
    ### END CODE HERE ###

    return loss
```


```python
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat, y)))
```

```
L2 = 0.43
```


What to remember:
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc…
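As a quick illustration of np.maximum, which is listed above but not demonstrated in this part, it takes the element-wise maximum of two arrays (the values here are made up):

```python
a = np.array([-1, 5, 2])
b = np.array([ 3, 2, 2])
print(np.maximum(a, b))  # [3 5 2]
print(np.maximum(a, 0))  # [0 5 2] -- this pattern implements the ReLU activation
```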

# Part 2: Logistic Regression with a Neural Network mindset

You will learn to:
- Build the general architecture of a learning algorithm, including:
  - Initializing parameters
  - Calculating the cost function and its gradient
  - Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.

## 1 - Packages

First, let's run the cell below to import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- h5py is a common package to interact with a dataset that is stored in an H5 file.
- matplotlib is a famous library to plot graphs in Python.
- PIL and scipy are used here to test your model with your own picture at the end.

```python
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset  # helper provided with the assignment's starter files

%matplotlib inline
```

## 2 - Overview of the Problem set

Problem Statement: You are given a dataset (“data.h5”) containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).

You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.

Let’s get more familiar with the dataset. Load the data by running the following code.

```python
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
```

We added “_orig” at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don’t need any preprocessing).

Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.

```python
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
```

```
y = [1], it's a 'cat' picture.
```

Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.

Exercise: Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)

Remember that train_set_x_orig is a numpy array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0].

```python
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###

print("Number of training examples: m_train = " + str(m_train))
print("Number of testing examples: m_test = " + str(m_test))
print("Height/Width of each image: num_px = " + str(num_px))
print("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print("train_set_x shape: " + str(train_set_x_orig.shape))
print("train_set_y shape: " + str(train_set_y.shape))
print("test_set_x shape: " + str(test_set_x_orig.shape))
print("test_set_y shape: " + str(test_set_y.shape))
```

```
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)
```


For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy array of shape (num_px * num_px * 3, 1). After this, our training (and test) dataset is a numpy array where each column represents a flattened image. There should be m_train (respectively m_test) columns.

Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px * num_px * 3, 1).

A trick when you want to flatten a matrix X of shape (a, b, c, d) to a matrix X_flatten of shape (b*c*d, a) is to use:

```python
X_flatten = X.reshape(X.shape[0], -1).T      # X.T is the transpose of X
```

```python
# Reshape the training and test examples

### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(m_train, -1).T
test_set_x_flatten = test_set_x_orig.reshape(m_test, -1).T
### END CODE HERE ###

print("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print("train_set_y shape: " + str(train_set_y.shape))
print("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print("test_set_y shape: " + str(test_set_y.shape))
print("sanity check after reshaping: " + str(train_set_x_flatten[0:5, 0]))
```

```
train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]
```


To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.

One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler, more convenient, and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
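For reference, a sketch of that general center-and-standardize recipe (not used below; the standardized variable name is illustrative):

```python
# subtract the mean of the whole array, then divide by its standard deviation
mu = train_set_x_flatten.mean()
sigma = train_set_x_flatten.std()
train_set_x_standardized = (train_set_x_flatten - mu) / sigma
```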

Let’s standardize our dataset.

```python
train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.
```


What you need to remember:

Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, …)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- “Standardize” the data

## 3 - General Architecture of the learning algorithm

It’s time to design a simple algorithm to distinguish cat images from non-cat images.

You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network! Mathematical expression of the algorithm:

For one example $x^{(i)}$:

$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$

$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)}) \tag{2}$$

$$\mathcal{L}(a^{(i)}, y^{(i)}) = -y^{(i)} \log(a^{(i)}) - (1 - y^{(i)}) \log(1 - a^{(i)}) \tag{3}$$
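Putting equations (1)-(3) together, here is a minimal numpy sketch of the forward pass for one example (the values of w, b, x and y are illustrative placeholders):

```python
import numpy as np

w = np.array([[0.1], [0.2]])  # weights, shape (2, 1)
b = 0.5                       # bias
x = np.array([[1.0], [2.0]])  # one example with 2 features, shape (2, 1)
y = 1                         # true label

z = np.dot(w.T, x) + b                             # equation (1)
a = 1.0 / (1 + np.exp(-z))                         # equation (2): sigmoid(z)
loss = -(y * np.log(a) + (1 - y) * np.log(1 - a))  # equation (3)
print(z, a, loss)             # z = 1.0, a ≈ 0.731, loss ≈ 0.313
```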