Momentum-based Gradient Optimizer introduction

Gradient Descent is an optimization technique used in Machine Learning frameworks to train models. Training relies on an objective function (also called the error function), which measures the error a Machine Learning model makes on a given dataset.
The model's parameters are initialized to random values. As the algorithm iterates, the parameters are updated so that the value of the objective function moves closer and closer to its optimum.
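
For reference, here is a minimal sketch of plain Gradient Descent on a single-variable function. The function f(x) = x^2 - 4x + 4, its gradient, and the learning rate of 0.01 match the momentum example later in this lesson; the fixed iteration count of 1000 is an illustrative assumption.

# A minimal sketch of plain Gradient Descent on f(x) = x^2 - 4x + 4
# (same function and learning rate as the momentum example below;
# the fixed iteration count is an illustrative assumption)

def grad(x):
    # Derivative of f(x) = x^2 - 4x + 4
    return 2 * x - 4

alpha = 0.01   # learning rate
x = 0.0        # initial parameter value

for i in range(1000):
    x = x - alpha * grad(x)   # step in the direction of the negative gradient

print("Estimated minimizer:", x)   # approaches 2, where f attains its minimum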

Adaptive Optimization Algorithms, however, are gaining popularity because of their ability to converge swiftly. In contrast to conventional Gradient Descent, these algorithms use statistics from previous iterations to make the convergence process more robust.

Momentum-based Optimization:
Momentum-based optimization is an Adaptive Optimization Algorithm that uses exponentially weighted averages of gradients over previous iterations to stabilize convergence, resulting in quicker optimization. For example, in most real-world applications of Deep Neural Networks, training is carried out on noisy data. It is therefore necessary to reduce the effect of noise when the data are fed in batches during optimization. This problem can be tackled using Exponentially Weighted Averages (also called Exponentially Weighted Moving Averages).

Implementing Exponentially Weighted Averages:
To approximate the trend in a noisy dataset of size N:

theta_{0}, theta_{1}, theta_{2}, ..., theta_{N},

we maintain a set of parameters v_{0}, v_{1}, v_{2}, ..., v_{N}. As we iterate through the values in the dataset, we compute these parameters as follows:

On iteration t:
    Get the next theta_{t}
    v_{t} = beta * v_{t-1} + (1 - beta) * theta_{t}

This update averages v_{t} over roughly the previous 1 / (1 - beta) iterations. The averaging ensures that only the underlying trend is retained while the noise is averaged out. This method is used as a strategy in momentum-based gradient descent to make it robust against noise in the data samples, resulting in faster training.
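
As a concrete illustration, the short sketch below applies this update to a synthetic noisy series. The noisy sine data and the choice beta = 0.9 are illustrative assumptions, not part of the algorithm itself.

import math
import random

beta = 0.9   # averages over roughly 1 / (1 - beta) = 10 previous values
v = 0.0      # running exponentially weighted average

# Synthetic noisy data: a sine trend plus uniform noise (illustrative assumption)
data = [math.sin(t / 10.0) + random.uniform(-0.3, 0.3) for t in range(100)]

smoothed = []
for theta_t in data:
    v = beta * v + (1 - beta) * theta_t
    smoothed.append(v)

# The smoothed series follows the sine trend while most of the noise is averaged out
print(smoothed[-5:])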

As an example, if you were to optimize a function f(x) over the parameter x, the following pseudocode illustrates the algorithm:

On iteration t:
    On the current batch, compute the gradient df(x)/dx
    v := beta * v + (1 - beta) * df(x)/dx
    x := x - alpha * v
The hyperparameters of this Optimization Algorithm are alpha, called the Learning Rate, and beta, which plays a role similar to acceleration in mechanics.

Following is an implementation of Momentum-based Gradient Descent for the function f(x) = x^2 - 4x + 4:

# Hyperparameters of the optimization algorithm
alpha = 0.01   # learning rate
beta = 0.9     # momentum coefficient

# Objective function: f(x) = x^2 - 4x + 4
def obj_func(x):
    return x * x - 4 * x + 4

# Gradient of the objective function: f'(x) = 2x - 4
def grad(x):
    return 2 * x - 4

# Parameter of the objective function
x = 0.0

# Number of iterations
iterations = 0

# Exponentially weighted average of the gradients
v = 0.0

while True:
    iterations += 1

    # Update the weighted average of the gradients, then take a step
    v = beta * v + (1 - beta) * grad(x)

    x_prev = x
    x = x - alpha * v

    print("Value of objective function on iteration", iterations, "is", obj_func(x))

    # Stop when the update no longer changes x (within floating-point precision)
    if x_prev == x:
        print("Done optimizing the objective function.")
        break
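
When run, the parameter x converges toward 2, the minimizer of f(x) = (x - 2)^2, and the printed objective value approaches 0. The x_prev == x stopping test works here because, once the momentum term has decayed, the update alpha * v eventually becomes smaller than the floating-point spacing around x, so x stops changing; for noisier objectives, a tolerance on the change in x or on the gradient magnitude would be a more robust criterion.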
