International Journal On Advances in Telecommunications, volume 14, numbers 1 and 2, 2021


A Probabilistic Learning Reinforcement Model for the Performance Analysis of Multimedia Indexing and Packet Switching

Authors:
Clement Leung
Yao Tong

Keywords: computer networks; packet switching; multimedia information indexing; reinforcement learning; multi-agent; naïve Bayes classifiers; stochastic game

Abstract:
A stochastic model of binary classification in the presence of noise is considered, where classification outcomes are non-deterministic. To ensure the correctness of a particular classification decision, repeated reinforcements need to be acquired. By accumulating sufficient reinforcements, one learns to predict the class label. In this study, we develop a probabilistic learning reinforcement classification model and apply it to multimedia information indexing and to noisy network transmission. Three learning strategies are analyzed. The first requires the accumulation of a pre-specified total number of positive labels; the second builds on the first and additionally requires that these reinforcements occur consecutively in the observation sequence. The third strategy views the classification process from a multi-agent stochastic game perspective, with the labelling decision determined by whichever class label attains a given threshold first. The model characteristics are studied for the three strategies and key measures of performance are obtained. The model is applied to fault-tolerant network communications over a noisy channel, where learning success corresponds to error-free transmission of data packets, and to multimedia indexing, where learning success corresponds to the successful automatic assignment of an index term to a particular data object. The present learning paradigm allows the effectiveness and performance of these and similar systems to be meaningfully quantified and evaluated.
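The three stopping rules described in the abstract can be illustrated with a short Monte Carlo sketch. This is not the authors' model or code; it is a minimal simulation under assumed parameters (reinforcement probability `p` and threshold `k` are hypothetical choices), showing how the three strategies differ in the number of observations needed and in the resulting label decision.

```python
import random

def trials_until_total(p, k, rng):
    """Strategy 1: observe until k positive reinforcements in total."""
    n = pos = 0
    while pos < k:
        n += 1
        if rng.random() < p:
            pos += 1
    return n

def trials_until_consecutive(p, k, rng):
    """Strategy 2: observe until a run of k consecutive positives
    (a negative observation resets the run)."""
    n = run = 0
    while run < k:
        n += 1
        if rng.random() < p:
            run += 1
        else:
            run = 0
    return n

def race_to_threshold(p, k, rng):
    """Strategy 3 (stochastic-game view): each observation reinforces
    one of two competing class labels; the first label to reach the
    threshold k determines the classification."""
    a = b = 0
    while a < k and b < k:
        if rng.random() < p:
            a += 1
        else:
            b += 1
    return 'A' if a >= k else 'B'

rng = random.Random(0)
p, k, runs = 0.7, 5, 10_000  # assumed illustrative values
avg1 = sum(trials_until_total(p, k, rng) for _ in range(runs)) / runs
avg2 = sum(trials_until_consecutive(p, k, rng) for _ in range(runs)) / runs
winsA = sum(race_to_threshold(p, k, rng) == 'A' for _ in range(runs)) / runs
print(f"mean observations, total rule:       {avg1:.2f}")  # analytically k/p ~ 7.14
print(f"mean observations, consecutive rule: {avg2:.2f}")
print(f"fraction of races won by label A:    {winsA:.2f}")
```

As expected, the consecutive-run rule demands noticeably more observations than the simple total-count rule at the same threshold, which is the trade-off the second strategy pays for its stronger evidence requirement.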

Pages: 1 to 9

Copyright: Copyright (c) to authors, 2021. Used with permission.

Publication date: December 31, 2021

Published in: journal

ISSN: 1942-2601