BO-RL: Buffer Optimization in Data Plane Using Reinforcement Learning

Sneha, M. and Kataoka, Kotaro and Shobha, G. (2021) BO-RL: Buffer Optimization in Data Plane Using Reinforcement Learning. Lecture Notes in Networks and Systems, 225. pp. 355-369. ISSN 2367-3370

Full text not available from this repository.

Abstract

Fine-tuning the buffer size is a well-known technique for improving latency and throughput in a network. However, it is difficult to achieve because the microscopic traffic pattern changes dynamically and is affected by many factors in the network, and the fine-grained information needed to predict the optimum buffer size for the upcoming moment is difficult to obtain. To address this problem, this paper proposes a new approach, Buffer Optimization using Reinforcement Learning (BO-RL), which dynamically adjusts the buffer size of routers based on observations of the network environment, including routers and end devices. A proof-of-concept implementation was developed using NS-3, OpenAI Gym, and TensorFlow to integrate the Reinforcement Learning (RL) agent with the router so that it can dynamically adjust its buffer size. This paper reports the working of BO-RL and the results of preliminary experiments on a network topology with a limited number of nodes. Significant improvements in end-to-end delay and average throughput are observed when BO-RL is applied to a router.
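The abstract describes an RL agent that observes the network state and grows or shrinks a router's buffer to trade off delay against throughput. A minimal, self-contained sketch of that control loop is shown below, using a toy queue model and tabular Q-learning in place of the paper's NS-3 / OpenAI Gym / TensorFlow setup; the environment dynamics, the reward shaping, and all names here are illustrative assumptions, not the authors' implementation.

```python
import random

class BufferEnv:
    """Toy router model (assumed): a larger buffer raises throughput
    but also raises queuing delay, so an intermediate size is best."""
    ACTIONS = (-10, 0, 10)  # shrink / keep / grow the buffer (in packets)

    def __init__(self, min_size=10, max_size=200, start=100):
        self.min_size, self.max_size, self.start = min_size, max_size, start
        self.buffer = start

    def reset(self):
        self.buffer = self.start
        return self._state()

    def _state(self):
        return self.buffer // 10  # discretize occupancy for the Q-table

    def step(self, action_idx):
        delta = self.ACTIONS[action_idx]
        self.buffer = max(self.min_size, min(self.max_size, self.buffer + delta))
        throughput = min(1.0, self.buffer / 80.0)  # saturates past 80 packets
        delay = self.buffer / 200.0                # grows linearly with size
        reward = throughput - delay                # assumed reward shaping
        return self._state(), reward, False, {}

def train(episodes=300, steps=40, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning over the toy environment."""
    rng = random.Random(seed)
    env = BufferEnv()
    q = {}  # (state, action) -> estimated value
    n_actions = len(env.ACTIONS)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q.get((s, i), 0.0))
            s2, r, _, _ = env.step(a)
            target = r + gamma * max(q.get((s2, i), 0.0) for i in range(n_actions))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            s = s2
    return env, q

def greedy_buffer(env, q, steps=40):
    """Follow the learned policy and return the buffer size it settles on."""
    s = env.reset()
    for _ in range(steps):
        a = max(range(len(env.ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s, _, _, _ = env.step(a)
    return env.buffer
```

Under this assumed reward, the best fixed buffer size is around 80 packets (throughput saturates there while delay keeps growing), so a trained agent should steer the buffer toward that region from its starting point of 100.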

IITH Creators:
Kataoka, Kotaro (ORCiD: UNSPECIFIED)
Item Type: Article
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: LibTrainee 2021
Date Deposited: 29 Jul 2021 05:29
Last Modified: 29 Jul 2021 05:29
URI: http://raiithold.iith.ac.in/id/eprint/8559
Publisher URL: http://doi.org/10.1007/978-3-030-75100-5_31
OA policy: https://v2.sherpa.ac.uk/id/publication/33093
Related URLs:
