ICSEA 2019, The Fourteenth International Conference on Software Engineering Advances


Comparative Evaluation of Input Features Used for Deep Neural Networks to Recognize Semantic Indoor Scene from Time-Series Images Obtained Using Mobile Robot

Authors:
Hirokazu Madokoro
Hanwool Woo
Kazuhito Sato

Keywords: bag-of-features; category maps; convolutional neural networks; counter propagation networks; self-organizing maps; semantic indoor scene recognition.

Abstract:
Indoor living environments change continuously according to people's varied lifestyles and activities. Human-symbiotic robots therefore require advanced capabilities for environmental understanding and adaptation. Numerous machine-learning-based approaches have been proposed for robotic environmental adaptation, and numerous types of features, such as brightness, edges, and texture, have been used as inputs to learning networks. This study evaluates combinations of supervised-learning-based indoor scene recognition methods and their input features. The paper presents a framework that provides image features of three types according to the learning strategy. Experimental results obtained using two open benchmark datasets revealed suitable combinations of input features, including weights obtained from the category maps of Counter Propagation Networks (CPNs), used as inputs to Deep Neural Networks (DNNs). We demonstrate a suitable combination of features from scene images for semantic indoor scene recognition. In particular, higher recognition accuracy is obtainable using original time-series images for learning with Convolutional Neural Networks (CNNs).
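The abstract contrasts two input-feature strategies: feeding raw time-series frames directly to a CNN, and feeding weights taken from a CPN-style category map to a fully connected DNN. The sketch below is a minimal, hypothetical illustration of that pipeline, not the authors' implementation; all class names, map sizes, and training parameters are illustrative assumptions, and only the competitive (SOM-like) layer of a counter propagation network is modeled.

```python
# Hypothetical sketch (not the authors' code): two input-feature strategies
# for semantic indoor scene recognition, as contrasted in the abstract.
import numpy as np
import torch
import torch.nn as nn

# --- (a) Raw time-series frames fed to a small CNN classifier --------------
class FrameCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (batch, 3, H, W) image frames
        return self.classifier(self.features(x).flatten(1))

# --- (b) Category-map weights (CPN competitive layer) fed to a DNN ---------
def cpn_category_map(features, map_shape=(10, 10), epochs=20, lr=0.5, seed=0):
    """Toy counter-propagation-style training of the competitive layer only.
    Returns the learned weight vectors, which can serve as compact inputs."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    weights = rng.random((map_shape[0] * map_shape[1], d))
    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)  # decaying learning rate
        for x in features:
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            weights[winner] += alpha * (x - weights[winner])
    return weights  # shape: (map units, d)

class MapDNN(nn.Module):
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):  # x: (batch, in_dim) category-map-derived features
        return self.net(x)
```

Under this sketch, comparing the two strategies would amount to training FrameCNN on the raw frames and MapDNN on features derived from the category-map weights, then evaluating both on the same benchmark splits.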

Pages: 190 to 195

Copyright: Copyright (c) IARIA, 2019

Publication date: November 24, 2019

Published in: conference

ISSN: 2308-4235

ISBN: 978-1-61208-752-8

Location: Valencia, Spain

Dates: from November 24, 2019 to November 28, 2019