

Weight Difference Propagation for Stochastic Gradient Descent Learning

Authors:
Shahrzad Mahboubi
Hiroshi Ninomiya

Keywords: neural network, gradient-based training algorithm, stochastic gradient descent method, error back propagation, weight difference propagation.

Abstract:
This paper proposes a new stochastic (mini-batch) training algorithm to reduce the computational and hardware implementation costs of the error Back Propagation (BP) method. In recent years, with the rapid development of IoT, there has been an increasing need to implement neural networks capable of training on large-scale data on microcomputers, especially edge computers. Since neural network training is based on the BP method, an error propagation architecture from the output layer to the input layer is required for each training sample to calculate the gradient. As a result, hardware and computational costs increase. This paper attempts to improve the BP method by using the inner product of the weights and their update amounts (differences) for training, reducing the hardware and computational costs. The proposed method eliminates the need for the BP architecture for each training sample in the mini-batch and calculates the weight updates with only a single weight difference propagation. This means that the proposed method can reduce the computational complexity of the backward calculations of training to 1/
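For context, the baseline the abstract refers to is standard mini-batch SGD with error back propagation, in which the backward pass is evaluated for every sample in the mini-batch. The sketch below illustrates only that baseline, not the proposed weight difference propagation; the one-hidden-layer architecture, function names, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Baseline: mini-batch SGD with per-sample error back propagation (BP).
# The per-sample backward pass in sgd_step() is the cost that the proposed
# weight difference propagation aims to avoid.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(n_in, n_hidden, n_out):
    return {
        "W1": rng.normal(scale=0.1, size=(n_hidden, n_in)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.1, size=(n_out, n_hidden)),
        "b2": np.zeros(n_out),
    }

def forward(p, x):
    h = sigmoid(p["W1"] @ x + p["b1"])   # hidden activations
    y = p["W2"] @ h + p["b2"]            # linear output
    return h, y

def backward(p, x, h, y, t):
    # Error is propagated from the output layer back toward the input layer
    # for this single training sample (squared-error loss assumed).
    delta2 = y - t                               # output-layer error
    delta1 = (p["W2"].T @ delta2) * h * (1 - h)  # hidden-layer error
    return {
        "W1": np.outer(delta1, x), "b1": delta1,
        "W2": np.outer(delta2, h), "b2": delta2,
    }

def sgd_step(p, batch_x, batch_t, lr=0.1):
    grads = {k: np.zeros_like(v) for k, v in p.items()}
    for x, t in zip(batch_x, batch_t):           # one backward pass per sample
        h, y = forward(p, x)
        g = backward(p, x, h, y, t)
        for k in grads:
            grads[k] += g[k]
    for k in p:
        p[k] -= lr * grads[k] / len(batch_x)     # averaged mini-batch update
    return p

# Tiny usage example on random data.
params = init_params(n_in=4, n_hidden=8, n_out=1)
xs = rng.normal(size=(16, 4))
ts = rng.normal(size=(16, 1))
params = sgd_step(params, xs, ts)
```

In this baseline, the backward computation scales with the mini-batch size, since each sample triggers its own error propagation; the abstract's claim is that the proposed method replaces these per-sample passes with a single propagation of weight differences.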

Pages: 12 to 17

Copyright: Copyright (c) IARIA, 2023

Publication date: March 13, 2023

Published in: conference

ISSN: 2308-4529

ISBN: 978-1-68558-059-9

Location: Barcelona, Spain

Dates: from March 13, 2023 to March 17, 2023