Published in

SAGE Publications, International Journal of Distributed Sensor Networks, 15(1), article ID 1550147718823994, 2019

DOI: 10.1177/1550147718823994

Fog server deployment technique: An approach based on computing resource usage

Journal article published in 2019 by Jung-Hoon Lee, Sang-Hwa Chung, and Won-Suk Kim
This paper is made freely available by the publisher.


Archiving policy (data provided by SHERPA/RoMEO): preprint archiving forbidden; postprint archiving forbidden; published version archiving allowed.

Abstract

Cloud computing is a type of Internet-based computing that allows users to access computing resources connected to the Internet anytime and anywhere. Recently, as the Internet-of-Things market built on the cloud has grown, a tremendous amount of data is being generated and services requiring low latency are increasing. To address these problems, a new architecture called fog computing has been proposed. Fog computing processes data on a network device close to the user, drastically reducing the bandwidth demanded of the network and providing near-real-time responses. However, little research has examined which network devices fog servers should be deployed on. In this article, we propose a fog server deployment technique that minimizes the data movement path in a fog computing environment, together with a vector bin packing algorithm that makes full use of a fog device's computing resources when many services are concentrated on one network device. Experimental results show that the proposed algorithm reduces the data movement distance and maximizes the utilization of the fog device's computing resources.
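
As a rough illustration of the resource-allocation idea mentioned in the abstract, the sketch below applies a first-fit-decreasing vector bin packing heuristic to place services, each with a (CPU, memory) demand vector, onto fog devices with limited capacities. The heuristic, the service and device names, and the demand figures are assumptions made for illustration only; the paper's actual algorithm and parameters are not described in the abstract and may differ.

```python
# Minimal vector bin packing sketch for fog service placement.
# Assumption: a first-fit-decreasing heuristic over (CPU, memory) demands;
# the paper's actual algorithm may differ.

from typing import Dict, List, Optional, Tuple

Service = Tuple[str, float, float]  # (name, cpu_demand, mem_demand)
Device = Tuple[str, float, float]   # (name, cpu_capacity, mem_capacity)


def place_services(services: List[Service],
                   devices: List[Device]) -> Dict[str, Optional[str]]:
    """Assign each service to the first fog device with enough remaining
    CPU and memory, considering services in decreasing order of total demand."""
    # Remaining capacity per device, indexed by device name.
    remaining = {name: [cpu, mem] for name, cpu, mem in devices}
    placement: Dict[str, Optional[str]] = {}

    # Consider services with the largest combined demand first.
    for name, cpu, mem in sorted(services, key=lambda s: s[1] + s[2], reverse=True):
        for dev_name, (free_cpu, free_mem) in remaining.items():
            if cpu <= free_cpu and mem <= free_mem:
                remaining[dev_name][0] -= cpu
                remaining[dev_name][1] -= mem
                placement[name] = dev_name
                break
        else:
            placement[name] = None  # no fog device could host this service
    return placement


if __name__ == "__main__":
    # Hypothetical devices (cores, GB RAM) and services with demand vectors.
    devices = [("fog-1", 4.0, 8.0), ("fog-2", 2.0, 4.0)]
    services = [("video-analytics", 2.0, 3.0),
                ("sensor-aggregation", 0.5, 1.0),
                ("alerting", 1.0, 2.0)]
    print(place_services(services, devices))
```

Treating demands as vectors rather than a single scalar is what keeps one dimension (for example, memory) from being exhausted while the other sits idle, which is the resource-utilization concern the abstract raises.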