Online dynamic multi-user computation offloading and resource allocation for HAP-assisted MEC: an energy efficient approach

Nowadays, the paradigm of mobile computing is evolving from a centralized cloud model towards Mobile Edge Computing (MEC). In regions without ground communication infrastructure, incorporating aerial edge computing nodes into the network emerges as an efficient approach to deliver Artificial Intelligence (AI) services to Ground Devices (GDs). This paper investigates the computation offloading and resource allocation problem within a HAP-assisted MEC system, with the goal of minimizing energy consumption. Considering the randomness and dynamism of GDs' task arrivals and of the wireless communication quality, stochastic optimization techniques are utilized to transform the long-term dynamic optimization problem into a deterministic optimization problem. Subsequently, the problem is further decomposed into three sub-problems that can be solved in parallel. An online Energy Efficient Dynamic Offloading (EEDO) algorithm is proposed to address these problems. Then, we conduct a theoretical performance analysis of EEDO. Finally, we carry out parameter analysis and comparative experiments, demonstrating that the EEDO algorithm can effectively reduce system energy consumption while maintaining the stability of the system.


Introduction
With the increasing advancement of information technology and 6G, there has been a rapid increase in the number of intelligent terminals and emerging Artificial Intelligence (AI) applications [1]. By 2025, it is estimated that the number of global intelligent terminal devices will exceed 25 billion and will continue to grow rapidly in the following decades [2]. Moreover, computing-intensive AI applications such as ultra-high-definition video streaming analysis, intelligent driving, augmented reality, and facial recognition are developing swiftly, leading to a massive demand for computing-intensive tasks from devices [3,4]. However, due to the limited battery capacity and computational resources of these devices, it becomes challenging or even impossible to process all the tasks generated by these AI applications locally [5].
Mobile Cloud Computing has large task processing capabilities. By offloading AI tasks to the cloud, it can effectively reduce the processing burden on end-user devices [6]. However, the distance between end-user devices and cloud computing infrastructure, along with network capacity constraints, may lead to significant transmission delays and energy consumption [7]. Furthermore, offloading a large amount of data to the cloud could cause network overload and congestion. Mobile Edge Computing (MEC), as an emerging computing paradigm, provides a powerful way to solve this problem [8]. By moving cloud computing resources and services closer to where AI data is generated and processed, MEC can bring numerous benefits to users, such as reducing service latency, decreasing network congestion, and improving service quality [9].
The base stations of traditional MEC deployed on cellular networks are typically fixed to the ground and immovable [10]. However, for remote regions such as oceans, wilderness, and deserts, ground-based MEC networks struggle to provide coverage [11]. Airborne devices with extensive coverage and strong computing capabilities have become a research hotspot for overcoming the limitations of ground-based MEC. High Altitude Platforms (HAPs), with their large coverage areas and computing capacities, can serve as airborne base stations providing services to Ground Devices (GDs) [12]. HAPs offer advantages such as low transmission delay, robust computing capability, wide service area, and prolonged endurance [13]. Additionally, HAPs can be flexibly deployed according to specific circumstances, offering better MEC services to GDs. Consequently, the issue of computation offloading in HAP-supported MEC systems is gaining extensive attention [13].
In this manuscript, we study online dynamic computation offloading and multi-user resource allocation within the HAP-assisted MEC framework. The aim is to optimize energy efficiency while maintaining system stability. The control variables for decision-making are: 1) the GDs' local CPU cycle frequencies, 2) the size of the computation tasks offloaded by the GDs, and 3) the computational resources allocated by the HAP. Based on stochastic optimization techniques, an online approach is proposed to tackle these challenges. The proposed Energy Efficient Dynamic Offloading (EEDO) algorithm adaptively makes the computation offloading and resource allocation decisions. Extensive experiments, including both parameter analysis and comparison experiments, validate EEDO's efficacy. The main contributions of this study are as follows:

1. We study the task offloading and resource allocation problem in a HAP-assisted MEC system, where multiple GDs process tasks locally or offload tasks to the HAP. Our goal is to minimize system energy consumption. The local CPU cycle frequencies of the GDs, the size of the computation tasks offloaded by the GDs, and the resources allocated by the HAP are utilized to optimize system performance, while taking into account the randomness of task arrivals and the uncertainty of communication quality.

2. We employ stochastic optimization techniques to transform the original stochastic optimization problem into three sub-problems that can be solved in parallel, and design the EEDO algorithm to effectively reduce system energy consumption without prior statistical information on task arrivals or communication. Through theoretical analysis, we bound the gap between the algorithm's solution and the optimal solution.

3. We conduct extensive experiments to evaluate the EEDO algorithm. Parameter analysis demonstrates that EEDO can achieve a balance between energy consumption and performance and effectively adapt to various task arrival rates and numbers of GDs. Comparative experiments illustrate that the algorithm significantly outperforms the baseline algorithms in reducing system energy consumption while ensuring system performance.
The remaining sections of this manuscript are structured as follows: Section 2 builds the system model and formalizes the problem. Section 3 presents our algorithm, employing stochastic optimization to partition and resolve the original optimization problem. Section 4 provides a theoretical analysis of the EEDO algorithm. Section 5 conducts the parameter and comparison experiments. Section 6 discusses recent related works, and Section 7 concludes our work.

System Model
This paper considers a HAP-assisted MEC framework in which multiple GDs can offload tasks to the edge server deployed on a HAP, as shown in Fig. 1. The ground layer includes N GDs, denoted by N = {1, 2, ..., i, ..., N}. In the aerial layer, a HAP serves as the edge server, providing ground devices with computational AI services and data processing capabilities. Within our formulated system, time is segmented into discrete intervals called time slots, denoted T = {0, 1, ..., t, ..., T − 1}, with each interval of duration τ. Typically, GDs may have various computational demands within each time slot, thereby generating a substantial volume of computation-intensive tasks. The computational resource requirements of these tasks may surpass the processing capabilities of the local devices, making it infeasible for the GDs to handle all tasks locally. Specifically, A_i(t) tasks arrive at each GD i in each time slot t; the GD can process a portion of the tasks locally and offload the remainder to the HAP for processing. Upon receiving the offloaded tasks, the HAP allocates computational resources to these tasks from multiple GDs based on the backlog of the entire system during that time slot t. The main notation is summarized in Table 1.
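As a concrete illustration, the slotted arrival process described above can be sketched in a few lines of Python (all numeric values are illustrative assumptions, not the paper's settings):

```python
import random
from dataclasses import dataclass

# Minimal sketch of the slotted system described above: N GDs, T time
# slots of length tau, with A_i(t) task bits arriving at each GD per
# slot. All numeric values are illustrative assumptions.
N, T_SLOTS, TAU = 4, 10, 1.0
A_MAX = 1.8e6                      # max arriving bits per slot

@dataclass
class GroundDevice:
    queue_bits: float = 0.0        # Q_i(t): local task backlog (bits)

def arrivals(rng):
    """Draw A_i(t) for every GD: uniform over [0, A_MAX] bits."""
    return [rng.uniform(0, A_MAX) for _ in range(N)]

rng = random.Random(0)
gds = [GroundDevice() for _ in range(N)]
for t in range(T_SLOTS):
    for gd, a in zip(gds, arrivals(rng)):
        gd.queue_bits += a         # backlog grows until tasks are served
```

Without a serving policy the backlogs only grow; the offloading and resource allocation decisions introduced below are what drain these queues.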

Communication model
GDs establish communication with the HAP, which is stationed at a consistent altitude within the stratosphere.
Our analysis incorporates both direct Line-of-Sight (LoS) and indirect Non-Line-of-Sight (NLoS) transmission models to describe the communication model.
Based on [14], the path loss for communication between a given GD i and the HAP combines the LoS and NLoS components:

PL_i(t) = 20 log_10(4π f_c l_i / c) + ρ_i^LoS η_i^LoS + (1 − ρ_i^LoS) η_i^NLoS.   (1)

Here, f_c is the carrier frequency utilized by the communication system, c is the speed of light, and the variables d_i and r_i respectively denote the vertical and horizontal distances between GD i and the HAP, with l_i = √(d_i² + r_i²) the link distance. The coefficients η_i^LoS and η_i^NLoS are the excess path loss values for the LoS and NLoS paths, and ρ_i^LoS specifies the probability of employing the LoS model for transmissions between GD i and the HAP; the constants κ_1 and κ_2, together with η_i^LoS and η_i^NLoS, are determined by the prevailing environmental conditions. Building upon the aforementioned path loss model, the channel gain h_i(t) between GD i and the HAP during slot t is deduced as:

h_i(t) = 10^(−PL_i(t)/10).   (2)

Furthermore, the GDs communicate with the HAP using Frequency Division Multiple Access (FDMA) technology [15]. Therefore, the data rate for GD i at a given slot t can be represented as:

R_i(t) = W_i log_2(1 + P_i(t) h_i(t) / σ²),   (3)

where W_i is the bandwidth spectrum apportioned to GD i by the HAP, P_i(t) is the transmission power of GD i, and σ² represents the power of the additive white Gaussian noise.

Fig. 1 The HAP-assisted MEC framework
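The rate computation above can be sketched as follows, using a standard probabilistic LoS/NLoS air-to-ground model. The constants (carrier frequency, κ_1, κ_2, η values) are illustrative assumptions, not the paper's exact settings:

```python
import math

# Hedged sketch of the air-to-ground channel described above (standard
# probabilistic LoS/NLoS model); all constants are assumed values.
C_LIGHT = 3e8                     # speed of light (m/s)
F_C     = 2e9                     # carrier frequency f_c (Hz), assumed
K1, K2  = 11.95, 0.14             # environment constants kappa_1, kappa_2
ETA_LOS, ETA_NLOS = 1.0, 20.0     # excess path loss (dB) for LoS / NLoS

def data_rate(d, r, W, P, sigma2):
    """R_i(t) = W_i * log2(1 + P_i * h_i / sigma^2) for a GD at vertical
    distance d and horizontal distance r from the HAP."""
    theta = math.degrees(math.atan2(d, r))       # elevation angle (deg)
    p_los = 1.0 / (1.0 + K1 * math.exp(-K2 * (theta - K1)))
    dist = math.hypot(d, r)                      # link distance l_i
    fspl = 20 * math.log10(4 * math.pi * F_C * dist / C_LIGHT)
    pl_db = fspl + p_los * ETA_LOS + (1 - p_los) * ETA_NLOS
    h = 10 ** (-pl_db / 10)                      # channel gain h_i(t)
    return W * math.log2(1 + P * h / sigma2)

# Example: HAP at 20 km altitude, GD 500 m away horizontally,
# 25 MHz of bandwidth, 0.1 W transmit power, noise power 1e-13 W.
rate = data_rate(d=20e3, r=500, W=25e6, P=0.1, sigma2=1e-13)
```

As expected from Eq. (3), the rate grows monotonically with the transmit power for fixed geometry and bandwidth.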

Task and queue model
In the model, each GD i maintains a task queue denoted as Q i (t) to store the tasks that arrive in each time slot [16].GDs effectively allocate computational resources to handle tasks by adopting a partial offloading mechanism.
The size of the tasks processed on-device by GD i is denoted by D_i,l(t), which indicates the size (in bits) of the tasks processed by GD i itself in the given slot t, and is calculated as:

D_i,l(t) = τ f_i(t) / ϕ_i,

where ϕ_i is the number of CPU cycles required by GD i to process a single bit of data, and f_i(t) represents the operational frequency of GD i's CPU within slot t, constrained by 0 ≤ f_i(t) ≤ f_i^max. Additionally, GDs can perform task offloading, with the size of the tasks offloaded to the HAP bounded by the link capacity:

0 ≤ D_i,o(t) ≤ R_i(t) τ.

For each time slot t, the size of the newly generated tasks at GD i is denoted A_i(t). Following this, the task queue of GD i evolves as:

Q_i(t + 1) = max{Q_i(t) − D_i,l(t) − D_i,o(t), 0} + A_i(t),

where max{Q_i(t) − D_i,l(t) − D_i,o(t), 0} ensures that no more tasks from GD i are processed during time slot t than are already in the queue, thereby ensuring the stability of the task queue.
In the multi-layer computing architecture we have considered, the HAP maintains a task queue for each GD to store the tasks offloaded from GD i to the HAP, denoted as H i (t) .This queue carries all tasks offloaded from the corresponding GD to the HAP, performing centralized processing and resource optimization.
At slot t, the size of the tasks processed by the HAP for GD i is denoted as C_i(t). The HAP task queue is updated as follows:

H_i(t + 1) = max{H_i(t) − C_i(t), 0} + D_i,o(t).
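Both queue recursions can be expressed directly in code, a minimal sketch of the update rules above:

```python
def update_gd_queue(Q, D_l, D_o, A):
    """Q_i(t+1) = max(Q_i(t) - D_i,l(t) - D_i,o(t), 0) + A_i(t):
    serve locally and by offloading, then add new arrivals."""
    return max(Q - D_l - D_o, 0.0) + A

def update_hap_queue(H, C, D_o):
    """H_i(t+1) = max(H_i(t) - C_i(t), 0) + D_i,o(t):
    the HAP serves C bits, then receives the freshly offloaded bits."""
    return max(H - C, 0.0) + D_o
```

The max{·, 0} clamp means over-provisioned service is simply wasted rather than driving the backlog negative.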

Energy consumption model
For this HAP-assisted MEC system, our focus is on minimizing the energy consumption, including the on-device energy consumption of GDs' local computing, the energy for offloading data transmissions by GDs, and the energy required for computations by the HAP.
The energy consumed by the on-device computing of GDs is closely related to the CPU chip design. For GD i, the on-device computing energy consumption, labeled E_i,l(t), is formulated as:

E_i,l(t) = ξ_i ϕ_i D_i,l(t) f_i(t)²,

where ξ_i denotes the effective switched capacitance, f_i(t) stands for the CPU's processing frequency, and ϕ_i corresponds to the requisite CPU cycles for processing each data bit [17].
The energy consumed by GD i to offload tasks to the HAP is:

E_i,o(t) = P_i(t) D_i,o(t) / R_i(t),

where P_i(t) characterizes the transmission power of GD i within slot t.
The energy consumption of the HAP for processing tasks from GD i is denoted as E_i,h(t) and is given by:

E_i,h(t) = l_1 C_i(t),

where l_1 denotes the energy expended by the HAP for each bit of data processed.
The cumulative energy consumption of the system, designated E(t), aggregates all three energy consumption parts:

E(t) = Σ_{i∈N} [E_i,l(t) + E_i,o(t) + E_i,h(t)].

The goal is to minimize the long-term time-averaged expectation of E(t).
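A minimal sketch of the three energy terms, assuming the common CMOS dynamic-power model for local computing and airtime-based transmission energy:

```python
def local_energy(xi, phi, f, d_local):
    """E_i,l(t) = xi * phi * D_i,l * f^2 (CMOS dynamic energy model)."""
    return xi * phi * d_local * f * f

def offload_energy(p_tx, d_off, rate):
    """E_i,o(t): transmit power times the airtime d_off / rate needed
    to push d_off bits through the uplink."""
    return p_tx * d_off / rate

def hap_energy(l1, c_bits):
    """E_i,h(t) = l1 * C_i(t): per-bit processing cost at the HAP."""
    return l1 * c_bits

# The per-slot system energy E(t) is the sum of these three parts
# over all GDs i in N.
```

The cubic dependence on frequency (ξ ϕ D_l f² with D_l ∝ f) is what later makes slowing the CPU down such an effective energy lever.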

Problem formulation
Minimizing energy consumption is crucial as it directly affects the system's operational sustainability and costs. Our objective is to devise an online dynamic offloading strategy that minimizes the energy consumption through the following control variables: the computational frequencies of the GDs, the offloading policy, and the resource allocation of the HAP. The decision variable set is denoted X(t) = {f(t), D_o(t), C(t)}. The problem is formulated as:

P1: min_{X(t)} lim_{T→∞} (1/T) Σ_{t=0}^{T−1} E{E(t)}

subject to the following constraints:

C1: 0 ≤ f_i(t) ≤ f_i^max, ∀i, t;
C2: 0 ≤ D_i,o(t) ≤ R_i(t)τ, ∀i, t;
C3: Σ_{i∈N} C_i(t) ≤ τ f_h^max / ϕ_h, ∀t;
C4: the queues Q_i(t) and H_i(t) are mean-rate stable.

The problem presented is a stochastic optimization problem. Owing to the unpredictable arrival pattern of tasks and the time-varying communication channels, the required statistical information is difficult to obtain in advance. Therefore, we take advantage of stochastic optimization theory to solve problem P1.

Energy efficient dynamic offloading algorithm design
Here, we utilize stochastic optimization theory to convert the previously formulated problem P1 into a more manageable deterministic one. By decomposing the transformed problem into several subproblems, the complexity of problem-solving is reduced. Subsequently, we design the EEDO algorithm to address these issues. Due to the inherent nature of stochastic optimization techniques, the EEDO algorithm can still obtain asymptotically optimal offloading decisions, even in the absence of future statistical information.

Problem transformation
We designate Θ(t) = [Q(t), H(t)] as the vector representing the task queue backlogs. The Lyapunov function is defined to quantify the queue backlog:

L(Θ(t)) = (1/2) Σ_{i∈N} [Q_i(t)² + H_i(t)²].

To maintain system queue stability, we establish the Lyapunov drift function. This function quantifies the transition in the system's queue state from a given slot t to the subsequent slot t + 1:

Δ(Θ(t)) = E{L(Θ(t + 1)) − L(Θ(t)) | Θ(t)}.

The aim is to reduce the system's energy expenditure. In pursuit of this goal, combining both queue length and energy consumption, we aim to jointly optimize the system's energy consumption and queue performance. Hence, the drift-plus-penalty function is defined as

Δ(Θ(t)) + V E{E(t) | Θ(t)},   (14)

where V is a penalty weight that trades off system energy consumption against queue stability.
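Numerically, the Lyapunov function and a one-slot realization of the drift-plus-penalty expression can be sketched as:

```python
def lyapunov(Q, H):
    """L(Theta(t)) = 0.5 * (sum_i Q_i^2 + sum_i H_i^2)."""
    return 0.5 * (sum(q * q for q in Q) + sum(h * h for h in H))

def drift_plus_penalty(Q_now, H_now, Q_next, H_next, energy, V):
    """One-slot sample of the drift L(t+1) - L(t) plus the weighted
    energy penalty V * E(t); the analysis takes its expectation."""
    return lyapunov(Q_next, H_next) - lyapunov(Q_now, H_now) + V * energy
```

Squaring the backlogs penalizes long queues disproportionately, which is what pulls the controller toward stability even while it chases low energy.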

Theorem 1
No matter what the queue backlogs or the task offloading decisions are, the drift-plus-penalty function adheres to the following relationship:

Δ(Θ(t)) + V E{E(t) | Θ(t)} ≤ B + V E{E(t) | Θ(t)} + Σ_{i∈N} Q_i(t) E{A_i(t) − D_i,l(t) − D_i,o(t) | Θ(t)} + Σ_{i∈N} H_i(t) E{D_i,o(t) − C_i(t) | Θ(t)},   (16)

where

B = (1/2) Σ_{i∈N} E{A_i(t)² + (D_i,l(t) + D_i,o(t))² + C_i(t)² + D_i,o(t)²}   (17)

is a constant.
Minimizing the right-hand side of Eq. (16) in each slot, and dropping the terms that do not depend on the decisions, the original stochastic optimization problem is converted into the following per-slot deterministic problem:

P2: min_{X(t)} V E(t) − Σ_{i∈N} Q_i(t) [D_i,l(t) + D_i,o(t)] + Σ_{i∈N} H_i(t) [D_i,o(t) − C_i(t)],

subject to the same per-slot constraints on f(t), D_o(t), and C(t) as in P1.

Energy efficient dynamic offloading algorithm
This part designs the online Energy Efficient Dynamic Offloading (EEDO) algorithm, aimed at minimizing the upper bound in Eq. (16). In the transformed problem, the decisions f(t), D_o(t), and C(t) are decoupled. Thus, problem P2 can be decomposed into three subproblems. Next, we describe these subproblems one by one and provide their corresponding solutions.

Local CPU frequency allocation for GDs
By extracting the part related to the decision f(t) from problem P2, we obtain the local CPU frequency allocation subproblem for the GDs:

P2.1: min_{f(t)} Σ_{i∈N} [V ξ_i τ f_i(t)³ − Q_i(t) τ f_i(t) / ϕ_i]

subject to:

0 ≤ f_i(t) ≤ f_i^max, ∀i.

This is a convex optimization problem. Setting the first derivative to zero yields f_i(t) = √(Q_i(t) / (3V ξ_i ϕ_i)). Consequently, the optimal local CPU frequency is:

f_i*(t) = min{√(Q_i(t) / (3V ξ_i ϕ_i)), f_i^max}.
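The closed-form frequency rule admits a one-line implementation; a sketch, with the stationary point clipped to the feasible range:

```python
import math

def optimal_frequency(Q, V, xi, phi, f_max):
    """f_i*(t) = min(sqrt(Q_i(t) / (3*V*xi*phi)), f_i^max): the stationary
    point of the convex per-GD objective, capped at the CPU's limit."""
    return min(math.sqrt(Q / (3.0 * V * xi * phi)), f_max)
```

A larger backlog Q pushes the CPU to run faster, while a larger V (a more energy-averse system) slows it down, exactly the trade-off the drift-plus-penalty weight encodes.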

Offloading computation allocation for GDs
By extracting the part related to the offloading decision, we obtain the following optimization problem:

P2.2: min_{D_o(t)} Σ_{i∈N} [V P_i(t) / R_i(t) + H_i(t) − Q_i(t)] D_i,o(t)

subject to:

0 ≤ D_i,o(t) ≤ R_i(t)τ, ∀i.

This is a linear programming problem, and the solution for the GDs' offloading computation is:

D_i,o*(t) = R_i(t)τ if Q_i(t) − H_i(t) > V P_i(t) / R_i(t), and D_i,o*(t) = 0 otherwise.
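Because the objective is linear in each D_i,o(t), the optimum sits at a vertex of the feasible interval, which gives the bang-bang threshold rule above. A sketch:

```python
def offload_decision(Q, H, V, P, R, tau):
    """Offload at full link capacity R*tau bits when the queue-differential
    benefit Q_i - H_i exceeds the weighted per-bit transmission cost
    V*P/R; otherwise offload nothing (LP optimum at an interval endpoint)."""
    return R * tau if Q - H > V * P / R else 0.0
```

Intuitively, offloading only pays when the local backlog exceeds the HAP-side backlog by more than the energy price of moving a bit.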

Computation resource allocation for HAP
By extracting the portion related to the decision C(t) from problem P2, we formulate the computation resource allocation subproblem for the HAP:

P2.3: min_{C(t)} Σ_{i∈N} [V l_1 − H_i(t)] C_i(t)

subject to:

C_i(t) ≥ 0, Σ_{i∈N} C_i(t) ≤ τ f_h^max / ϕ_h.

This problem is analogous to a knapsack problem, with the weight coefficient of C_i(t) being V l_1 − H_i(t). The capacity of the knapsack is the amount of tasks that the HAP can process. Below, we provide the solution process for this problem.
(1) Set the baseline amount of tasks the HAP can process during slot t as C_h = τ f_h^max / ϕ_h, where f_h^max denotes the HAP's maximum CPU frequency and ϕ_h represents the number of CPU cycles the HAP needs to process a single bit of data.
(2) Sort all GDs in ascending order of the weight V l_1 − H_i(t) to obtain the order in which computational resources are allocated.
(3) The HAP allocates computational resources to the GDs according to this order, obtaining the amount of tasks the HAP can process for GD i as:

C_i(t) = min{H_i(t), C_h} if V l_1 − H_i(t) < 0, and C_i(t) = 0 otherwise.   (26)
(4) Update the remaining size of tasks that the HAP can process as C_h = C_h − C_i(t).
(5) Repeat steps (3) and (4) until there are no tasks left for the HAP to process in slot t, or no GD requires further allocation of computational resources for task processing.
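Steps (1) through (5) can be sketched as a greedy loop (illustrative; variable names are assumptions):

```python
def allocate_hap(H, V, l1, f_max_h, phi_h, tau):
    """Greedy solution of the knapsack-like subproblem: capacity
    C_h = tau * f_max_h / phi_h bits; serve GDs in ascending order of the
    weight V*l1 - H_i, allocating only while that weight is negative."""
    C_h = tau * f_max_h / phi_h            # step (1): HAP capacity in bits
    alloc = [0.0] * len(H)
    order = sorted(range(len(H)), key=lambda i: V * l1 - H[i])  # step (2)
    for i in order:
        if C_h <= 0 or V * l1 - H[i] >= 0:
            break                          # step (5): capacity gone or no gain
        alloc[i] = min(H[i], C_h)          # step (3): Eq. (26)
        C_h -= alloc[i]                    # step (4): shrink remaining capacity
    return alloc
```

For example, with backlogs H = [5, 1, 10], V·l1 = 0.5, and capacity 12, the GD with the largest backlog is served first and the leftover capacity goes to the next-heaviest queue.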
Subsequently, we present the detailed EEDO algorithm in Algorithm 1.
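Putting the three subproblem solutions together, one slot of EEDO can be sketched end-to-end as follows. This is an illustrative sketch of the per-slot loop, not the authors' exact Algorithm 1, and all constants are assumed values:

```python
import math

# End-to-end sketch of one EEDO slot: solve the three decoupled
# subproblems, then apply the queue updates. Constants are illustrative.
V, TAU, XI, PHI, F_MAX = 1e8, 1.0, 1e-27, 1000.0, 1e9

def eedo_slot(Q, H, A, R, P, C_h, l1=1e-9):
    n = len(Q)
    # (1) local CPU frequency: f* = min(sqrt(Q / (3*V*xi*phi)), f_max)
    f = [min(math.sqrt(q / (3.0 * V * XI * PHI)), F_MAX) for q in Q]
    # (2) threshold offloading: full link capacity or nothing
    D_o = [R[i] * TAU if Q[i] - H[i] > V * P[i] / R[i] else 0.0
           for i in range(n)]
    # (3) greedy HAP allocation while the weight V*l1 - H_i is negative
    C = [0.0] * n
    for j in sorted(range(n), key=lambda k: V * l1 - H[k]):
        if C_h <= 0 or V * l1 - H[j] >= 0:
            break
        C[j] = min(H[j], C_h)
        C_h -= C[j]
    # queue updates and per-slot energy bookkeeping
    Q2, H2, energy = [], [], 0.0
    for i in range(n):
        D_l = TAU * f[i] / PHI
        Q2.append(max(Q[i] - D_l - D_o[i], 0.0) + A[i])
        H2.append(max(H[i] - C[i], 0.0) + D_o[i])
        energy += (XI * PHI * D_l * f[i] ** 2         # local computing
                   + (P[i] * TAU if D_o[i] else 0.0)  # transmission airtime
                   + l1 * C[i])                       # HAP processing
    return Q2, H2, energy

# One example slot with two GDs.
Q2, H2, E_slot = eedo_slot([1e6, 1e6], [5e5, 5e5], [1e6, 1e6],
                           [1e7, 1e7], [0.1, 0.1], C_h=1e6)
```

Each decision uses only the current backlogs and channel rates, which is what makes the policy online: no statistics of future arrivals or channels are needed.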

Algorithm 1 Energy Efficient Dynamic Offloading (EEDO) Algorithm

Theoretical analysis of the energy efficient dynamic offloading algorithm
Here, we examine the EEDO algorithm's efficacy from a mathematical perspective. Lemma 1 is presented as follows.
Lemma 1  For any task arrival rate λ, there exists an offloading decision π* that is independent of the current task queues and satisfies

E{E^π*(t)} ≤ E*(λ),   (30)

where E*(λ) symbolizes the minimum total energy consumption under arrival rate λ.

Proof
Carathéodory's theorem [18] is utilized to derive Lemma 1. Similar to related works, and for the sake of brevity, the details of the proof are omitted here.

It is noteworthy that the task arrival rate is finite, which implies that the system's energy consumption is also finite. Thus, we denote the upper and lower bounds of the system's energy consumption as Ê and Ě, respectively. Subsequently, we define the time-averaged queue length as

J = lim_{T→∞} (1/T) Σ_{t=0}^{T−1} Σ_{i∈N} E{Q_i(t) + H_i(t)}.

Building on Lemma 1, we establish the upper bounds on the system's energy consumption and queue length in Theorem 2.

Theorem 2  For any control parameter V > 0 and any ǫ > 0 such that the arrival rate λ + ǫ lies within the system's capacity region, the EEDO algorithm guarantees:

lim_{T→∞} (1/T) Σ_{t=0}^{T−1} E{E(t)} ≤ E*(λ) + B/V,   (31)

J ≤ (B + V(Ê − Ě)) / ǫ.   (32)

Proof
With Lemma 1, for any arbitrary stochastic offloading decision π and task arrival rate λ + ǫ, we have:

E{E^π(t)} ≤ E*(λ + ǫ), E{A_i(t) − D_i,l(t) − D_i,o(t)} ≤ −ǫ, E{D_i,o(t) − C_i(t)} ≤ −ǫ.   (33)

For any offloading decision π, we can derive:

Δ(Θ(t)) + V E{E(t) | Θ(t)} ≤ B + V E{E^π(t)} + Σ_{i∈N} Q_i(t) E{A_i(t) − D_i,l(t) − D_i,o(t)} + Σ_{i∈N} H_i(t) E{D_i,o(t) − C_i(t)}.   (34)

By substituting Eq. (33) into Eq. (34) and summing over all time slots, the following is derived:

E{L(Θ(T))} − E{L(Θ(0))} + V Σ_{t=0}^{T−1} E{E(t)} ≤ BT + VT E*(λ + ǫ) − ǫ Σ_{t=0}^{T−1} Σ_{i∈N} E{Q_i(t) + H_i(t)}.   (35)

Since ǫ, Q_i(t), and H_i(t) are all non-negative, we can deduce:

V Σ_{t=0}^{T−1} E{E(t)} ≤ BT + VT E*(λ + ǫ).   (36)

When Eq. (36) is divided by VT and as ǫ → 0, T → ∞, Eq. (31) is proven. Furthermore, with Eq. (35), we also get:

ǫ Σ_{t=0}^{T−1} Σ_{i∈N} E{Q_i(t) + H_i(t)} ≤ BT + VT E*(λ + ǫ) − V Σ_{t=0}^{T−1} E{E(t)}.   (37)

Given the bounds on E{E(t)}, the inequality simplifies to:

ǫ Σ_{t=0}^{T−1} Σ_{i∈N} E{Q_i(t) + H_i(t)} ≤ BT + VT(Ê − Ě).   (38)

By dividing both sides of Eq. (38) by ǫT, and as T → ∞, Eq. (32) is proven.

Experiment settings
In this part, extensive experiments are conducted to evaluate the efficacy of the EEDO algorithm. A HAP is deployed to serve GDs across a 1 km × 1 km remote zone. GDs are distributed randomly in this area, while the HAP remains static at a predetermined elevation. The size of the arriving tasks A_i(t) adheres to a uniform distribution over [0, 1.8] × 10^6 bits. GDs operate at a CPU cycle frequency of 1 GHz, with transmission power drawn from the distribution P(t) ∼ [0.01, 0.2] W [12]. The HAP has a maximum CPU cycle frequency of 20 GHz. The aggregate bandwidth available for communication between the GDs and the HAP is 100 MHz. Additionally, ξ_i = 10^{−27}, σ² = 10^{−13} W, and ϕ_i = 1000 cycles/bit [14]. The key parameter configurations are enumerated in Table 2.

Impact of parameter V
We select a set of different V values to validate their impact on the system's energy consumption and mean queue length. The changes in energy consumption and queue length in response to varying V are depicted in Figs. 2 and 3. Figure 2 illustrates a decreasing trend in the system's energy consumption with ascending V values. This is because a larger control parameter V indicates the system's tendency to prioritize energy optimization, which aligns with the result in Eq. (31). Figure 3 illustrates an increase in queue length as V increases. This phenomenon can be ascribed to the bounded processing and data transmission capacities of the GDs, which can handle only a finite amount of tasks, consequently resulting in an accumulation of pending tasks. Such findings align with the result of Eq. (32). Therefore, EEDO proves effective in reducing the energy consumption of the system whilst preserving the equilibrium of the task queue.

Impact of task arrival rate
Figures 4 and 5 illustrate the variations in the system's energy consumption and the mean queue length under diverse task arrival rates. Here, the task arrival rate is denoted by αA_i(t), with α = 0.6, 0.8, and 1.0. It is observed in Fig. 4 that the system's energy consumption rises with the task arrival rate. In a similar vein, Fig. 5 shows a positive correlation between the task arrival rate and the average queue length. Nonetheless, it is observed that within a short time period, both energy consumption and queue length attain a state of equilibrium. Thus, the EEDO algorithm can dynamically adjust offloading decisions, allowing the system to quickly stabilize.

Impact of GDs number
Figures 6 and 7 illustrate the influence of varying numbers of GDs on the system's energy consumption and mean queue length. Figure 6 reveals an increasing trend in energy consumption as the number of GDs increases. Figure 7 exhibits a continuous increase in mean queue length with more GDs, due to the HAP's finite processing capacity. As the number of GDs increases, some tasks cannot be processed in time, resulting in a continuous increase in queue length.

Comparison experiment
Herein, we analyze the EEDO algorithm's efficiency via comparative experiments. We compare the EEDO algorithm with three other algorithms, described as follows:

• Local-only algorithm: each GD processes all newly arrived tasks by itself.
• Offload-only algorithm: each GD offloads all newly arrived tasks to the HAP for processing.
• GTCO-21 algorithm: each GD adopts the greedy approach extended from [19] to perform task offloading.

Figures 8 and 9 illustrate the system's energy consumption and queue length under these algorithms. Compared to the Offload-only and GTCO-21 algorithms, our EEDO algorithm reduces system energy consumption while maintaining queue stability. With the Offload-only algorithm, the HAP's finite computational capacity leads to unprocessed task accumulation and a continuous increase in queue length. With the GTCO-21 algorithm, since all GDs try to offload as many tasks as possible to the HAP to alleviate their own burden, the excessive number of tasks exceeds the processing limit of the HAP, causing the queue length to keep increasing. Compared with the Local-only approach, EEDO not only sustains task queue stability but also reduces energy consumption.
Collectively, the EEDO performs well in reducing energy consumed while maintaining task queue constancy.

Related work
The computational capacity limitations and battery energy constraints of GDs pose significant challenges in processing high-demand computational tasks. MEC has emerged as an innovative solution to these issues. Through proper offloading of computation tasks and allocation of resources, these issues can be effectively resolved [20]. Tang et al. [21] studied the offloading of indivisible and delay-sensitive computational tasks in MEC systems. They designed a distributed offloading algorithm to reduce the task drop rate and average latency. Wu et al. [22] focused on task offloading within a decentralized and heterogeneous IoT network, augmented by blockchain technology. They presented an algorithm for real-time decision-making on task offloading to enhance offloading efficiency. Guo et al. [23] investigated the task offloading process within densely populated IoT environments, proposing a cyclical search mechanism to optimize CPU cycle frequency and transmission power. Tang et al. [24] drew on the idea of Intent-based Networking (IBN) and proposed a Service Intent-aware Task Scheduling (SIaTS) framework for CPNs. Nahum et al. [25] proposed an intent-aware reinforcement learning method to perform the RRS function in a RAN slicing scenario. Liao et al. [26] developed a novel task offloading framework for air-ground integrated vehicular edge computing (AGI-VEC), which could enable a user vehicle to learn the long-term optimal task offloading strategy while satisfying long-term ultra-reliable low-latency communication constraints.
Moreover, many works have concentrated on partial task offloading. Tong et al. [27] explored minimizing the energy cost of the MEC system in a cooperative scenario. They established an online dynamic computational offloading algorithm to reduce additional overheads. Xia et al. [28] developed a model for a MEC offloading scheme powered by energy collection, employing a collaborative online optimization approach based on game and stochastic theories. Hu et al. [29] contemplated the equilibrium between power efficiency and service latency within extensive MEC networks, proposing a dynamic offloading and resource management protocol.
However, when ground communication facilities are damaged or lacking, HAP-assisted aerial MEC networks become an alternative solution. Waqar et al. [14] investigated offloading problems within MEC-augmented vehicular networks, designing a decentralized strategy utilizing reinforcement learning. Wang et al. [30] researched energy efficiency in airborne MEC networks and put forward an offloading strategy based on collective learning paradigms. Ren et al. [31] investigated caching and computational offloading issues with HAP assistance. They presented an algorithm based on the Lagrangian method.
Although many efforts have been made in the field of offloading and resource allocation, and some studies have also investigated issues related to HAP offloading, the offloading problem in the HAP-assisted MEC scenario remains challenging. This paper studies the task offloading and resource allocation problem in a HAP-assisted MEC system and designs the EEDO algorithm to reduce the energy consumption of the system while considering the randomness of task arrivals and the uncertainty of communication quality.

Conclusion
In our work, we study the dynamic multi-user computation offloading and resource allocation problem within a HAP-assisted MEC system. The problem is modeled as a stochastic optimization problem with objectives set on reducing the energy consumption of the system, whilst preserving queue stability and adhering to resource constraints. By applying stochastic optimization techniques, we recast the initial stochastic problem into a deterministic problem. This reformulated problem is then strategically split into three distinct subproblems. Then, we design the online EEDO algorithm to solve these three subproblems, which requires no prior statistical information on tasks. Our theoretical analysis proves that the EEDO algorithm maintains an equilibrium between the energy consumed and queue stability within the system. The results from our experiments further validate the EEDO algorithm's effectiveness in reducing the system's energy consumption while concurrently maintaining queue stability.

Table 1  Notations and definitions

N: the set of GDs
τ: the length of a time slot
A_i(t): the tasks arriving at GD i in time slot t
D_i,l(t): the tasks processed locally by GD i in time slot t
D_i,o(t): the tasks offloaded to the HAP by GD i in time slot t
Q_i(t): the queue backlog of GD i in time slot t
H_i(t): the queue backlog at the HAP for GD i in time slot t
f_i(t): the CPU cycle frequency of GD i in time slot t
ξ_i: the effective switched capacitance of GD i
l_1: the energy expended by the HAP for each bit of data processed

Table 2
Parameter settings