Mobile Edge Computing (MEC) is a key technology for supporting emerging low-latency Internet of Things (IoT) applications. With computing servers deployed at the network edge, the computational tasks generated by mobile users can be offloaded to these MEC servers and executed with low latency. Meanwhile, with the ever-increasing number of mobile users, the communication resources for offloading and the computational resources allocated to each user become increasingly limited. As a result, it is difficult for the MEC servers alone to process all the tasks in a timely manner. An effective approach to this challenge is to offload a portion of the tasks from the MEC servers to cloud servers, so that both types of servers are efficiently utilized to reduce latency. Given multiple MEC and cloud servers and the dynamics of communication latency, intelligent task assignment across servers is required. In this paper, we propose a deep reinforcement learning (DRL) based task assignment scheme for MEC networks, aiming to minimize the average task processing latency. Two design parameters of task assignment are optimized: cloud server selection and task partitioning. The problem is formulated as a Markov Decision Process (MDP) and solved with a DRL-based approach, which enables the edge servers to capture the system dynamics and adapt their task assignment strategies accordingly. Simulation results show that the proposed scheme significantly lowers the average task completion latency.
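The abstract does not include code, but a minimal sketch can illustrate the described setup: a DQN-style agent that jointly selects a cloud server and a task partition ratio, trained with reward equal to negative latency. All dimensions here (N_CLOUDS, N_RATIOS, STATE_DIM) and the discretization of the partition ratio are illustrative assumptions, not details from the paper.

```python
import random
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical sizes: 3 candidate cloud servers, partition ratio
# discretized into 5 levels {0.0, 0.25, 0.5, 0.75, 1.0}.
N_CLOUDS, N_RATIOS = 3, 5
N_ACTIONS = N_CLOUDS * N_RATIOS   # joint action: (cloud choice, split ratio)
STATE_DIM = 6                     # e.g. queue lengths and link delays (assumed)

class QNet(nn.Module):
    """Q-network mapping an observed system state to joint-action values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))

    def forward(self, s):
        return self.net(s)

qnet = QNet()
opt = optim.Adam(qnet.parameters(), lr=1e-3)

def select_action(state, eps=0.1):
    """Epsilon-greedy choice over joint actions; decode with divmod."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(qnet(state).argmax())

def td_update(state, action, reward, next_state, gamma=0.99):
    """One-step temporal-difference update; reward = -task latency."""
    q = qnet(state)[action]
    with torch.no_grad():
        target = reward + gamma * qnet(next_state).max()
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Example step: observe a state, pick an action, apply a TD update.
s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
a = select_action(s)
cloud_idx, ratio_idx = divmod(a, N_RATIOS)     # decode (server, split ratio)
td_update(s, a, reward=-1.7, next_state=s_next)  # dummy reward = -latency
```

Encoding the two design parameters as one joint discrete action keeps the output layer small; in practice a replay buffer and target network would be added, but those are omitted here for brevity.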
Mobile edge computing is a distributed computing paradigm that brings computation and data storage closer to where they are needed, improving response times and saving bandwidth in dynamic mobile networking environments. Despite improvements in network technology, data centers cannot always guarantee acceptable transfer rates and response times, which can be critical requirements for many applications. The aim of mobile edge computing is to move computation away from data centers toward the edge of the network, exploiting smart objects, mobile phones, or network gateways to perform tasks and provide services on behalf of the cloud. In this paper, we design a task offloading scheme in the mobile edge network that handles task distribution, offloading, and management by applying deep reinforcement learning. Specifically, we formulate the task offloading problem as a multi-agent reinforcement learning problem. The decision-making process of each agent is modeled as a Markov decision process, and a deep Q-learning approach is applied to cope with the large state and action spaces. To evaluate the performance of our proposed scheme, we develop a simulation environment for the mobile edge computing scenario. Our preliminary evaluation results with a simplified multi-armed bandit model indicate that our proposed solution provides lower latency for computation-intensive tasks in the mobile edge network and outperforms a naïve task offloading method.
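As a rough illustration of the simplified multi-armed bandit model mentioned for the preliminary evaluation, the sketch below treats each offloading target as an arm and uses negative latency as the reward. The arm names, latency distributions, and epsilon value are illustrative assumptions, not details from the paper.

```python
import random

# Hypothetical offloading targets (arms): run locally, on one of two
# edge servers, or on the remote cloud. Reward = negative latency.
ARMS = ["local", "edge-1", "edge-2", "cloud"]

class EpsGreedyBandit:
    """Epsilon-greedy bandit: keep a running mean reward per offloading
    target and mostly pick the best-known one, exploring occasionally."""
    def __init__(self, n_arms, eps=0.1):
        self.eps = eps
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms    # running mean reward per arm

    def select(self):
        if random.random() < self.eps:
            return random.randrange(len(self.counts))          # explore
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental mean: v += (r - v) / n
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = EpsGreedyBandit(len(ARMS))
for _ in range(1000):
    arm = bandit.select()
    # Dummy latency model: assume the cloud is slower on average.
    latency = random.uniform(3, 8) if ARMS[arm] == "cloud" else random.uniform(1, 5)
    bandit.update(arm, -latency)        # lower latency -> higher reward
best = max(range(len(ARMS)), key=bandit.values.__getitem__)
print("preferred offloading target:", ARMS[best])
```

Each agent maintaining its own bandit of this form gives a stateless baseline against which the full Markov-decision-process formulation with deep Q-learning can be compared.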