Batch Gradient Descent, Mini-Batch Gradient Descent, and Stochastic Gradient Descent

batch gradient descent
Each update uses the entire training set; ideally, given enough iterations it can reach the global optimum.
It struggles with large datasets, because every single update has to process all of the examples.

X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):  # num_iterations -- number of iterations
    # Forward propagation over the full training set
    a, caches = forward_propagation(X, parameters)
    # Compute cost
    cost = compute_cost(a, Y)
    # Backward propagation
    grads = backward_propagation(a, caches, parameters)
    # Update parameters
    parameters = update_parameters(parameters, grads)
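The loop above relies on the course's helper functions. As a self-contained, runnable sketch of the same idea (my own toy example, not the course code), here is batch gradient descent on a one-variable least-squares regression, where every update averages the gradient over the full dataset:

```python
import numpy as np

# Toy example: batch gradient descent for least-squares linear regression.
# Every iteration computes the gradient over the FULL dataset.
np.random.seed(0)
m = 200                              # number of examples
X = np.random.randn(1, m)            # shape (features, examples)
Y = 3.0 * X + 2.0                    # true weight 3, bias 2 (noise-free)

w, b = 0.0, 0.0
learning_rate = 0.1
for i in range(500):
    A = w * X + b                    # forward pass on all m examples
    dw = np.sum((A - Y) * X) / m     # gradient of 0.5*MSE w.r.t. w
    db = np.sum(A - Y) / m           # gradient w.r.t. b
    w -= learning_rate * dw
    b -= learning_rate * db

print(round(w, 3), round(b, 3))      # converges toward w=3, b=2
```

Because the data here are noise-free, the trajectory is smooth and heads straight for the optimum; the cost of that smoothness is one full pass over the data per update.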

stochastic gradient descent
Updates the parameters one training example at a time (it can also be run on a small batch at a time).
The randomness helps it escape local optima, but each individual update oscillates around the direction of the overall gradient.
In that respect it is a little like a genetic algorithm.

X = data_input
Y = labels
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1, m))
for i in range(0, num_iterations):
    for j in range(0, m):  # train on one example at a time
        # Forward propagation
        AL, caches = forward_propagation(shuffled_X[:, j].reshape(-1, 1), parameters)
        # Compute cost
        cost = compute_cost(AL, shuffled_Y[:, j].reshape(1, 1))
        # Backward propagation
        grads = backward_propagation(AL, shuffled_Y[:, j].reshape(1, 1), caches)
        # Update parameters
        parameters = update_parameters(parameters, grads, learning_rate)
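To make the one-example-per-update behavior concrete, here is a runnable sketch (my own toy example, not the course code): SGD on a one-variable least-squares regression, reshuffling the examples every epoch:

```python
import numpy as np

# Toy example: stochastic gradient descent on least-squares regression.
# Each parameter update uses a SINGLE shuffled example.
np.random.seed(0)
m = 200
X = np.random.randn(1, m)
Y = 3.0 * X + 2.0                        # true weight 3, bias 2 (noise-free)

w, b = 0.0, 0.0
learning_rate = 0.05
for epoch in range(20):
    permutation = np.random.permutation(m)   # reshuffle every epoch
    for j in permutation:
        x, y = X[0, j], Y[0, j]
        a = w * x + b                        # forward pass on one example
        dw = (a - y) * x                     # per-example gradient
        db = a - y
        w -= learning_rate * dw
        b -= learning_rate * db

print(round(w, 2), round(b, 2))              # noisily approaches w=3, b=2
```

Plotting the per-step updates would show the oscillation described above: each step follows a single example's gradient, which only agrees with the full-batch direction on average.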

Mini-batch gradient descent
The batch size is usually a power of two: batch_size = 2 ** n (e.g. 64, 128, 256).
Deep learning in practice almost always uses mini-batch gradient descent.
What most frameworks call "SGD" is really stochastic mini-batch gradient descent.

# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    np.random.seed(seed)    # To make your "random" minibatches the same as ours
    m = X.shape[1]          # number of training examples
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = m // mini_batch_size   # number of full mini-batches
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[:, k * mini_batch_size : (k + 1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k + 1) * mini_batch_size]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handling the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches
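The shuffle-then-partition logic can also be written more compactly. Here is a condensed, self-contained version of the same idea (my own rewrite, reproduced in full so the snippet runs on its own): a `range` with a step replaces the separate full-batch loop and end-case branch, because NumPy slicing silently clips an out-of-range stop index.

```python
import numpy as np

# Condensed variant of the shuffle-and-partition logic: slicing with
# k : k + mini_batch_size clips at the array end, so the final short
# mini-batch needs no special case.
def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    np.random.seed(seed)
    m = X.shape[1]                           # number of training examples
    permutation = np.random.permutation(m)   # Step 1: shuffle
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))
    return [(shuffled_X[:, k:k + mini_batch_size],   # Step 2: partition
             shuffled_Y[:, k:k + mini_batch_size])
            for k in range(0, m, mini_batch_size)]

# 148 examples with batch size 64 -> two full batches plus a remainder of 20.
X = np.random.randn(5, 148)
Y = (np.random.rand(1, 148) > 0.5).astype(int)
batches = random_mini_batches(X, Y, 64)
print([bx.shape[1] for bx, _ in batches])    # [64, 64, 20]
```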
seed = 0
for i in range(0, num_iterations):
    # Define the random minibatches. We increment the seed to reshuffle the dataset differently after each epoch
    seed = seed + 1
    minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
    for minibatch in minibatches:
        # Select a minibatch
        (minibatch_X, minibatch_Y) = minibatch
        # Forward propagation
        AL, caches = forward_propagation(minibatch_X, parameters)
        # Compute cost
        cost = compute_cost(AL, minibatch_Y)
        # Backward propagation
        grads = backward_propagation(AL, minibatch_Y, caches)
        # Update parameters
        parameters = update_parameters(parameters, grads, learning_rate)

batch_size is a hyperparameter and needs to be tuned.
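Putting the pieces together, here is a runnable sketch (my own toy example, not the course code) of mini-batch gradient descent on a one-variable least-squares regression, with batch_size = 32:

```python
import numpy as np

# Toy example: mini-batch gradient descent on least-squares regression.
# Each update averages the gradient over one shuffled mini-batch.
np.random.seed(0)
m = 200
X = np.random.randn(1, m)
Y = 3.0 * X + 2.0                            # true weight 3, bias 2 (noise-free)

w, b = 0.0, 0.0
learning_rate = 0.1
batch_size = 32                              # the batch_size hyperparameter
for epoch in range(100):
    permutation = np.random.permutation(m)   # reshuffle every epoch
    Xs, Ys = X[:, permutation], Y[:, permutation]
    for k in range(0, m, batch_size):        # last batch may be smaller
        xb, yb = Xs[:, k:k + batch_size], Ys[:, k:k + batch_size]
        A = w * xb + b                       # forward pass on the mini-batch
        dw = np.sum((A - yb) * xb) / xb.shape[1]
        db = np.sum(A - yb) / xb.shape[1]
        w -= learning_rate * dw
        b -= learning_rate * db

print(round(w, 2), round(b, 2))              # approaches w=3, b=2
```

Relative to the two extremes above, the batch size trades update noise against cost per update, which is exactly why it is a hyperparameter worth tuning.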