Joint optimization of network selection and task offloading for vehicular edge computing

Taking the mobile edge computing paradigm as an effective supplement to vehicular networks enables vehicles to obtain network resources and computing capability nearby, meeting the current large-scale increase in vehicular service requirements. However, the congestion of wireless networks and the insufficient computing resources of edge servers, caused by the strong mobility of vehicles and the offloading of a large number of tasks, make it difficult to provide users with good quality of service. Existing work often does not consider the influence of network access point selection on task execution latency. In this paper, a pre-allocation algorithm for vehicle tasks is proposed to solve the problem of service interruption caused by vehicle movement and limited edge coverage. Then, a system model is built that comprehensively considers vehicle movement characteristics, access point resource utilization, and edge server workloads, so as to characterize the overall latency of vehicle task offloading. Furthermore, an adaptive task offloading strategy is implemented that provides automatic and efficient network selection and task offloading decisions in vehicular edge computing. Experimental results show that the proposed method significantly improves overall task execution performance and reduces the time overhead of task offloading.


Introduction
Edge computing is an open platform that integrates core capabilities of computing, storage, and services at the edge of the network, near the source of sensor data. It provides intelligent services at the edge that meet the key needs of industry digitalization, such as real-time data streaming, data intelligence, and security and privacy protection. Edge computing has received great attention from academia, industry, and government. Mobile edge computing (MEC) offers cloud capabilities and a service environment for Internet of Things (IoT) applications at the edge of the mobile cellular network, such as 5G; it is regarded as a promising solution in which computing resources are pushed to the radio access network (RAN) and services are provided near the devices [1]. In the MEC-based vehicular network environment, MEC servers with powerful computing and storage capabilities are installed at the edge of the vehicular networks, usually collocated with the roadside units (RSUs). The communication between vehicles and RSUs is through small cell networks. Each edge server is collocated with an access point (AP) (e.g., a 5G base station). Users can request mobile edge services by selecting a nearby AP. Services for users are deployed at a nearby edge server rather than in the remote cloud, so as to reduce the latency from end devices to the cloud-hosted service. Merging MEC with the dense deployment of 5G macro/micro base stations makes ultra-low latency access to cloud functionalities possible [2-4]. Vehicular networks have gained huge popularity in recent years due to their wide range of applications.
*Correspondence: btang@hnust.edu.cn. 1 School of Computer Science and Engineering, Hunan University of Science and Technology, 411201 Xiangtan, China. Full list of author information is available at the end of the article.
Many new types of vehicular services have emerged to provide a good travel experience for users, such as autonomous driving, traffic safety, and traffic monitoring. These services depend on computing-intensive workloads with time constraints. With the continuous advancement of technology, today's vehicles have more capability than before, and many in-vehicle applications can be completed by the vehicle's local computing power. However, there are still some services that the vehicle cannot handle, such as autonomous driving, which establishes and recognizes the driving environment of the vehicle through radar, sensing, and monitoring equipment on the vehicle, and then realizes automatic control of the vehicle. Facing a complex and time-varying road network traffic environment, the weak processing capacity and low storage capacity of on-board equipment greatly restrict the real-time processing and effective storage of the large-scale traffic information collected by vehicles, which seriously affects the safety and reliability of autonomous driving. At the same time, new applications such as road environment augmented reality (AR), intelligent traffic behavior guidance, and voice-based human-vehicle dynamic interaction usually also require powerful computing capability and the support of massive data content; they bring users a rich and convenient driving experience, but also pose severe challenges to the computing and storage capabilities of smart vehicles. The introduction of MEC into vehicular networks is an effective way to solve the above problems. In the MEC-based vehicular network environment, computation tasks can be executed locally in the vehicles or offloaded to nearby MEC servers, according to conditions and constraints [5,6].
However, MEC-based vehicular networks still face the following challenges. First, vehicles have significant movement characteristics, and their geographical positions change over time. If a vehicle runs a large number of computation tasks, it usually passes through the coverage of multiple cells during task execution. During the movement of the vehicle, the distance between the vehicle and the edge server serving it changes dynamically. If the distance between them becomes too large, the quality of service (QoS) provided cannot be guaranteed, and the user experience will also degrade. Therefore, a strategy is needed to dynamically adjust task execution as vehicles move [6].
Second, the vehicle may be within the coverage of multiple available APs at the same time in a cell. If users choose APs autonomously, resource competition and network congestion may result. Therefore, how to select the appropriate network for each vehicle is also a problem to be considered.
Third, when the vehicle performs network selection and task offloading, if the AP selected by the vehicle is close to the edge server where the task is executed, the access delay arises only from the connection to the AP. Otherwise, the task needs to be transmitted from the AP to an edge server farther away, which introduces additional communication overhead. No specific protocols are assumed during the execution of the algorithms for network selection and task offloading. How to balance the access delay of network access against the communication delay is a difficult problem to be solved.
In addition, if the data volume of the computation task is large and the coverage of the cell is relatively small, the vehicle may have driven out of the previous cell before the task has been completely offloaded, so that the interrupted task needs to be retransmitted, which increases communication overhead. Therefore, a static task offloading strategy cannot solve this problem, and how to plan task offloading in a dynamic manner is the problem that this paper focuses on.
In this paper, we propose an adaptive task offloading strategy for the MEC-based vehicular network environment, which considers a scenario where the vehicle needs to pass through multiple cells during the offloading process of a large task. The impact factors considered in the adaptive computation task offloading strategy include vehicle speed, cell coverage, data transmission rate, access point load in the cell, and MEC server workload. Considering these impact factors, an optimal plan is made for the next offloading step, so that task execution is not interrupted when passing through multiple cells during the offloading process, while also minimizing the total delay of completing the tasks.
In the offloading strategy proposed in this paper, the whole task is first divided into many small task units (TUs) [7]. Then, based on a set of constraints, we obtain the following decisions: the AP to which the vehicle connects, the edge server to which the task is offloaded, the number of TUs allocated to the edge server, and the proportion of TUs offloaded to the edge server out of the total TUs allocated to the cell. It is an online strategy: whenever a vehicle enters a new cell, the offloading strategy is re-executed to obtain an optimal computation task offloading scheme for the new cell. Therefore, the proposed strategy can achieve a lower delay for task execution without interrupting the computation process.
The main contributions of this paper are summarized as follows:
• We propose a pre-allocation algorithm for vehicle tasks. It comprehensively considers vehicle movement characteristics and the surrounding environment of the vehicle, and dynamically adjusts the execution and offloading of tasks.
• We propose an optimization of the selection of network access points, which reduces the network congestion and resource competition caused by vehicles choosing network access points independently.
• We design and implement an adaptive offloading strategy, which provides vehicles with automatic and efficient network access selection, task offloading, and task migration decisions.
The rest of this paper is organized as follows. The next section summarizes related work. Then, we introduce our system model and solution, and propose a pre-allocation algorithm and an adaptive offloading strategy, followed by simulation and result analysis. The final section summarizes the paper and discusses future work.

Related work
In recent years, the edge computing paradigm has attracted great attention from academia and industry. It has the characteristics of fast processing speed and short response time, which bring cloud computing services closer to end users [8]. Edge computing has provided a powerful driving force for many key technologies such as 5G, IoT, AR, and vehicle-to-vehicle (V2V) communication. There are three common edge computing models: cloudlets [9], fog computing [10], and multi-access/mobile edge computing [11]. MEC, as a new architecture that moves service capabilities from the core network to the edge network, has attracted extensive research [6, 12-14].
The key research topics in MEC include the placement of edge servers, computation migration and offloading, and edge caching. Computation migration and offloading concern migration decisions and resource allocation. By migrating mobile device tasks/applications to servers in the network for execution, the computing capabilities of mobile devices can be enhanced, and the time and energy consumed when running applications on mobile devices can be reduced [1]. In recent years, several studies have addressed mobile task offloading in the MEC scenario. Task offloading can be classified according to optimization objectives, including delay- or latency-constrained offloading [15,16], energy-efficient offloading [17,18], energy-latency tradeoff offloading [19,20], and cost-efficient offloading [21]. Generally speaking, offloading is a multi-objective optimization problem, usually solved by optimization or heuristic algorithms.
Task partitioning and task division are usually adopted in offloading. In [22], Wu et al. proposed a path-based offloading partitioning algorithm to determine which portions of the application tasks to run on mobile devices and which on cloud servers, with different cost models in mobile environments. In [23], Kiani and Ansari proposed a task scheduling scheme designed for code partitioning over time and over the hierarchical cloudlets in a mobile edge network. Similar work includes [24], which proposed a partial offloading technique for wireless mobile cloud computing. In [7], Wang et al. also divided the whole task into several small task units, taking into account the divisibility of the task, and proposed dynamic offloading in MEC-enabled vehicular networks, which is similar to our work. Compared to [7], our work considers multiple servers and APs in the decision, which their work does not address. Moreover, most of the existing work does not consider the practical constraint of the variable moving speed of vehicles.
Some studies on offloading and migration focus on user mobility prediction in mobile edge networks. For example, the work in [25] formulates the mobility-driven decision-making problem for service migration using the framework of a Markov decision process (MDP); based on the MDP model's predictions, it decides whether to migrate services. In [26], Alasmari et al. also proposed an MDP-based methodology to intelligently make decisions that optimize multiple objectives.
In [27], Sun et al. developed an energy-aware mobility management scheme to optimize the total delay due to both communication and computation under a long-term energy consumption constraint of the user, without requiring future user mobility as a priori knowledge. In [28], Gao et al. proposed joint network selection and service placement for mobile edge computing. The authors considered the nonlinear network access latency, switching latency, and communication latency to minimize overall latency, and designed an online algorithm to reduce frequent switching costs and balance the access delay and communication delay. In [15], a contract-based offloading and computation resource allocation scheme was proposed to maximize the benefit of the MEC service provider, with consideration of vehicle mobility in cloud-enabled vehicular networks.
Resource sharing of access networks and edge servers is an important issue studied recently, for example, the graph-based cooperative scheduling proposed in [29], the matrix game approach proposed in [30], and the P2P-enabled decentralized edge server approach in [21]. In [31], Sardellitti et al. proposed and solved the offloading problem of jointly optimizing the radio resources and the computational resources in order to minimize the overall energy consumption of users while meeting latency constraints. In [32], the authors proposed an adaptive sequential offloading game approach, where the mobile users sequentially make offloading decisions based on the current interference environment and available computation resources, and adjust the number of offloaded users adaptively. Poularakis et al. studied the joint optimization of service placement and request routing in MEC-enabled multi-cell networks with storage-computation-communication constraints [33]. Data security and privacy protection in edge computing have also attracted the attention of many scholars [34,35]. The integration of blockchain and edge computing is becoming an important concept that leverages their decentralized management and distributed service to meet the security, privacy protection, scalability, and performance requirements of future networks and systems [36]. In [37], Gai et al. proposed a permissioned blockchain edge model for the smart grid network (PBEM-SGN) to address two significant issues in smart grids, privacy protection and energy security, by combining blockchain and edge computing techniques. In [38], the authors exploited consortium blockchain and smart contract technologies to achieve secure data storage and sharing in vehicular edge networks, and proposed a reputation-based data sharing scheme to ensure high-quality data sharing among vehicles.
In [39], physical-layer assisted privacy-preserving offloading schemes were proposed, and two efficient algorithms were developed to address the corresponding optimization problems by exploiting the favorable structure of the privacy-preserving offloading problem in the delay-optimal and energy-optimal scenarios. In [40], a hierarchical blockchain-enabled federated learning algorithm for knowledge sharing in the Internet of Vehicles (IoV) is proposed. The hierarchical blockchain framework not only improves the reliability and security of knowledge sharing, but also adapts to large-scale vehicular networks with various regional characteristics.
Compared with these works, the difference is that we consider a multi-cell MEC scenario in which small cells are densely deployed and serve multiple mobile vehicles. We consider multiple servers and multiple APs per cell in the decision, and also consider the mobility of vehicles. The novelty of our offloading strategy is that each access point is equipped with an edge server, multiple access points are included in a cell, and, due to the competition for limited computation resources, we study the joint optimization of access point selection and task offloading with task division to decrease queuing delay and task execution time.

System model and solution
Figure 1 shows the dynamic task offloading process of vehicles in MEC-based vehicular networks. There are multiple available network access points around the vehicle. At the initial time, the vehicle is located in the upper-right corner, and its TUs are offloaded to the edge server named Edge 1. Then, the vehicle moves to the next position along the red arrow. When this movement occurs, the coverage of the surrounding edge servers changes. If the distance between the vehicle and the edge server becomes too large, the QoS provided cannot be guaranteed, and the user experience will also decline. Therefore, the unfinished TUs need to be offloaded to a new edge server for execution. As shown in the figure, Edge 4, Edge 5, and Edge 6 are three candidates, and finally the unfinished TUs are offloaded to the edge server named Edge 4 according to a decision. The main notations used in this paper are listed in Table 1. We assume that the cells are closely adjacent and their coverage does not overlap; it is also assumed that the communication coverage of a cell is relatively small, about 100 m to 400 m or less. The coverage radii of the cells are represented by a set r = {r_1, r_2, ..., r_s, ...}. There are multiple APs and edge servers with powerful computing and storage capabilities in each cell. It is assumed that there are m APs and n edge servers in the coverage area of cell L^s_cell, denoted by M = {1, 2, 3, ..., m} and N = {1, 2, 3, ..., n}, respectively. An edge server in the cell can serve multiple vehicles at the same time through CPU sharing, but the computing resources allocated to each vehicle are limited. As can be seen in Fig. 3, the objective of the optimization decision is to optimally allocate tasks to the m APs and n edge servers.
We suppose that the on-board device in the vehicle has a large computing task to finish while it is moving on the road. Due to the heterogeneity of computing tasks, we denote the task of vehicle k as u_k, the computation input data bits of task u_k as s_k, and the CPU cycles required to process task u_k as w_k, which can be calculated as w_k = ω s_k, where the parameter ω depends on the computational complexity of task u_k. We denote the computation capacity of the on-board device as f^k_l. In addition, C_j denotes the total computing power of edge server j, and f^j_m denotes the computation capacity allocated to the vehicle by edge server j. For task u_k, the execution time on the local device is w_k / f^k_l, and the execution time on edge server j is w_k / f^j_m.
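The task model above can be sketched in a few lines of Python. This is only an illustration of the formulas w_k = ω s_k and t = w_k / f; all parameter values below are made up for the example and are not from the paper.

```python
# Sketch of the task model: required CPU cycles w_k = omega * s_k,
# and execution time = cycles / capacity. Values are illustrative only.

def required_cycles(s_k_bits: float, omega: float) -> float:
    """CPU cycles needed to process a task with s_k input bits."""
    return omega * s_k_bits

def execution_time(w_k: float, capacity_hz: float) -> float:
    """Execution time given required cycles and computation capacity."""
    return w_k / capacity_hz

s_k = 8e6        # task input size in bits (assumed)
omega = 100.0    # cycles per bit, task complexity factor (assumed)
w_k = required_cycles(s_k, omega)

t_local = execution_time(w_k, 1e9)  # on-board capacity f_l^k = 1 GHz (assumed)
t_edge = execution_time(w_k, 5e9)   # allocated edge capacity f_m^j = 5 GHz (assumed)
```

With these assumed numbers, offloading is worthwhile whenever the transmission overhead stays below t_local − t_edge, which is the trade-off the rest of the model formalizes.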
Taking into account the divisibility of the task, we can divide the whole task into several small TUs. By dividing the task into small TUs, offloading can be accurately controlled according to the vehicle's speed and the wireless network status. We can decide for each TU whether it should be processed locally or offloaded to the currently connected edge server. Even if there is an interruption during task execution, there is no need to retransmit the whole computing task; only the interrupted TU needs to be retransmitted. We assume that the size of each TU equals I_o bits. The total number of TUs of task u_k is ns_k = ⌈ s_k / I_o ⌉, where ⌈·⌉ is the ceiling function.
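The TU count is a simple ceiling division; a minimal sketch (function name is ours, not from the paper):

```python
import math

def num_task_units(s_k_bits: int, unit_bits: int) -> int:
    """Total number of TUs: ns_k = ceil(s_k / I_o)."""
    return math.ceil(s_k_bits / unit_bits)
```

For example, a 10 Mb task with 3 Mb TUs yields four TUs, the last one only partially filled.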
As the vehicle passes through several cells, the whole task is finished by several edge servers in successive cells. Thus, how many TUs should be offloaded to the edge servers in each cell is the scheduling problem solved in this paper. The maximum number of TUs that can be completed in cell L^s_cell is defined as N^s_k, so the amount of data that needs to be processed in the cell is N^s_k · I_o bits. In addition, out of these N^s_k TUs, we need to calculate the number of TUs that should be assigned to and processed by the edge servers according to the conditions, and the number of TUs processed locally. We define the optimal offloading ratio α^s_k, which indicates the ratio of the number of TUs offloaded by vehicle k to the maximum number of TUs N^s_k in cell L^s_cell.

Access point selection
When the vehicle is moving in a cell, it may be within the coverage of multiple available APs. If the vehicle chooses an AP autonomously, it may cause resource competition and network congestion. Therefore, how to select the appropriate AP for each vehicle is a problem we need to consider. The vehicle needs to select an AP from the surrounding candidates to transmit tasks. We assume that the maximum uplink transmission rate R_max is limited by the bandwidth of the AP. The maximal uplink rate assigned to vehicle k through AP i is denoted as r_{i,k}, which can be derived from Shannon's formula: r_{i,k} = B log_2(1 + p_k h_{i,k} / σ²), where h_{i,k} represents the channel gain between vehicle k and AP i, p_k represents the transmission power of vehicle k, B represents the channel bandwidth, and σ² represents the noise power. Assuming that the remaining transmission rate of AP i is R_i, if vehicle k needs to offload a task to an edge server through AP i, r_{i,k} ≤ R_i should be satisfied. When the remaining transmission rate of the AP is insufficient, the vehicle needs to wait for the completion of previous transmissions to release bandwidth resources for the new task transmission. We denote the waiting time of vehicle k due to insufficient AP bandwidth as T^AP_{i,k}.
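The uplink rate and the feasibility check r_{i,k} ≤ R_i can be sketched as follows; function names and parameter values are illustrative assumptions, not the paper's implementation:

```python
import math

def uplink_rate(bandwidth_hz: float, tx_power_w: float,
                channel_gain: float, noise_power_w: float) -> float:
    """Shannon formula: r_{i,k} = B * log2(1 + p_k * h_{i,k} / sigma^2)."""
    return bandwidth_hz * math.log2(1.0 + tx_power_w * channel_gain / noise_power_w)

def ap_can_serve(rate_bps: float, remaining_rate_bps: float) -> bool:
    """AP i can serve vehicle k only if r_{i,k} <= R_i (remaining rate)."""
    return rate_bps <= remaining_rate_bps

# Example: B = 1 MHz, p_k * h / sigma^2 = 3 gives an SNR term of 4.
rate = uplink_rate(1e6, 3.0, 1.0, 1.0)   # = 1e6 * log2(4) = 2 Mb/s
```

If `ap_can_serve` is false, the vehicle incurs the waiting time T^AP_{i,k} until earlier transmissions release bandwidth.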

Task offloading
In our model, the computing task of the vehicle does not have to be offloaded to the edge server closest to the AP. This reduces the load on hot edge servers and achieves load balancing across the whole system. At the same time, it also helps to reduce the queuing delay of computing tasks executed on the edge server and improve QoS. Correspondingly, an additional communication delay is introduced due to the distance factor. We use T^Q_{j,k} to denote the queuing delay of task execution on edge server j, that is, the sum of the estimated execution times of all tasks in the queue of edge server j plus the execution time of the task itself. T^C_{i,j} indicates the communication delay from AP i to edge server j, which grows with the distance dis_{i,j} between AP i and edge server j. If the computing task of the vehicle is offloaded to the edge server near the connected AP, T^C_{i,j} = 0.

Pre-allocation algorithm in cell
Because the data size of task u_k is large and the coverage of a cell is relatively small, the whole task is processed across several successive cells. The time that vehicle k stays in cell L^s_cell is T^s_stay = 2 r_s / v^s_k, where v^s_k represents the speed of vehicle k in the cell. Based on the time the vehicle stays in the cell, the maximum number of TUs processed locally is N^{max,s}_{loc,k} = ⌊ T^s_stay f^k_l / (ω I_o) ⌋. The number of TUs that the vehicle offloads to edge servers for processing is also limited, and N^{max,s}_{off,k} stands for the maximal number of offloaded TUs processed in the cell. The value of N^{max,s}_{off,k} depends on the time staying in the cell, the channel conditions in the cell, and the computing capacity of the edge servers. We use the average uplink rate of all APs in the cell to represent the data transmission rate in the cell: r̄^s = (1/m) Σ_{i∈M} r_{i,k}. Similarly, the average computing capacity f̄^s of all edge servers in the cell is calculated by Eq. (8), and this average value is used for processing offloaded tasks in the cell. In order to ensure that there is no interruption during task execution, the total time spent on transmitting and executing the offloaded TUs must not exceed the maximum time that the vehicle stays in the cell, i.e., N^{max,s}_{off,k} ( I_o / r̄^s + ω I_o / f̄^s ) ≤ T^s_stay. Therefore, the total number of TUs that can be completed in the cell is the sum of the maximum number of TUs that the on-board device can process (N^{max,s}_{loc,k}) and the maximum number of TUs that can be offloaded to the edge servers (N^{max,s}_{off,k}): N^{max,s}_k = N^{max,s}_{loc,k} + N^{max,s}_{off,k}. At the same time, in order to adapt to environment and system changes, we set a task adjustment factor θ (θ ∈ [0, 1]), which is inversely proportional to the quality of the wireless channel and the workload in the cell: the worse the wireless channel quality, the heavier the traffic, or the larger the system workload, the smaller the value of θ. Therefore, we obtain N^s_k = ⌊ θ_s N^{max,s}_k ⌋ (11), where ⌊·⌋ is the floor function. Thus, the pre-allocation of tasks for each cell can be obtained, expressed as N_k = (N^1_k, N^2_k, ..., N^s_k, ...).
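The per-cell pre-allocation described above can be sketched as a single function. This is our reading of the reconstructed equations (dwell time from the cell diameter, local and offloaded TU bounds, scaling by θ); the function name and all numeric values are assumptions for illustration:

```python
import math

def preallocate_tus(radius_m: float, speed_mps: float, f_local: float,
                    avg_rate_bps: float, avg_edge_capacity: float,
                    unit_bits: float, omega: float, theta: float) -> int:
    """Pre-allocated TU count N_k^s for one cell (sketch of Eqs. (8)-(11))."""
    t_stay = 2.0 * radius_m / speed_mps           # time to cross the cell diameter
    cycles_per_tu = omega * unit_bits             # CPU cycles per TU
    # Local bound: TUs the on-board device can finish while in the cell.
    n_loc = math.floor(t_stay * f_local / cycles_per_tu)
    # Offload bound: each TU must be transmitted AND executed within t_stay.
    per_tu_time = unit_bits / avg_rate_bps + cycles_per_tu / avg_edge_capacity
    n_off = math.floor(t_stay / per_tu_time)
    # Scale by the adjustment factor theta and take the floor.
    return math.floor(theta * (n_loc + n_off))

# Example: 100 m radius cell at 20 m/s (10 s dwell), 1 Mb TUs, omega = 100.
n_s = preallocate_tus(100.0, 20.0, 1e9, 1e7, 1e10, 1e6, 100.0, 0.5)
```

Raising the speed shrinks `t_stay` and therefore both bounds, matching the trend reported for Fig. 4.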

Adaptive task offloading strategy
The proposed adaptive task offloading consists of two stages. In the first stage, the number of TUs N^s_k assigned to the cell has been determined. The next stage aims to solve for the proportion of the task offloaded to edge servers for execution in the cell. If a vehicle offloads tasks to a nearby edge server, the time cost consists of four parts: 1) the time for the vehicle to establish a wireless connection with an AP in the cell and upload the data required for the task to be processed, 2) the communication delay between the AP and the selected edge server, 3) the time the vehicle waits for the edge server to complete its task queue, and 4) the time for the result data to be transferred back to the vehicle.
Table 2 (simulation parameters): number of edge servers in a cell, [2,5]; computing power of an edge server (MIPS), [20,60]; computing power allocated to a vehicle (MIPS), [10,15]; on-board computing power of vehicle (MIPS), [1,5]; number of access points in a cell, [2,5].
We define the decision variable named the offloading ratio, denoted α^s_k ∈ [0, 1], which is the proportion of the number of TUs offloaded to the edge server out of the total number of TUs that need to be completed in the cell. When α^s_k = 0, all TUs are processed locally in vehicle k; when α^s_k = 1, all TUs that vehicle k needs to complete in the cell are offloaded to the edge server.
Since the maximal uplink rate assigned to vehicle k through AP i in the cell is r_{i,k}, the transmission latency of all offloading tasks that vehicle k needs to complete in the cell can be given by α^s_k N^s_k I_o / r_{i,k}. We also define an AP selection decision variable x_{i,k} ∈ {0, 1}, where x_{i,k} = 1 indicates that vehicle k is connected to AP i for data transmission; otherwise, x_{i,k} = 0 means there is no connection between them. Therefore, the AP selection strategy can be expressed as x_k = (x_{1,k}, x_{2,k}, ..., x_{m,k}), with the restriction Σ_{i∈M} x_{i,k} = 1. If the access network is congested and there is a long queue at the AP, an AP waiting latency occurs, denoted as T^AP_{i,k}. After the edge server completes the task, it returns the result to the corresponding vehicle. Generally speaking, the data volume of the result is very small, so the transmission time of the result from the edge server to the vehicle can be ignored.
The time for processing the TUs locally in vehicle k can be expressed as (1 − α^s_k) N^s_k ω I_o / f^k_l. The time for processing the TUs offloaded by vehicle k to edge server j while moving in the cell can be expressed as α^s_k N^s_k ω I_o / f^j_m. We define an edge server selection decision variable y_{j,k} ∈ {0, 1}, where y_{j,k} = 0 indicates that the TUs of vehicle k are not offloaded to edge server j; otherwise, y_{j,k} = 1 means the TUs are offloaded to edge server j. So the edge server selection strategy can be expressed as y_k = (y_{1,k}, y_{2,k}, ..., y_{n,k}), with the restriction Σ_{j∈N} y_{j,k} = 1.
Therefore, considering the queuing latency of the AP, the queuing latency of the edge server, the communication latency between the edge server and the AP, the offloading time to edge server j through AP i, and the execution time on edge server j, the total offloading latency can be given by: T^off_k = Σ_{i∈M} x_{i,k} ( T^AP_{i,k} + α^s_k N^s_k I_o / r_{i,k} ) + Σ_{j∈N} y_{j,k} ( T^C_{i,j} + T^Q_{j,k} + α^s_k N^s_k ω I_o / f^j_m ). As mentioned above, the total latency of completing the allocated computing tasks in the cell is denoted as T^s_k, which can be calculated as T^s_k = max( (1 − α^s_k) N^s_k ω I_o / f^k_l, T^off_k ), since local execution and offloaded execution proceed in parallel. Thus, the latency-minimization problem can be formulated as: min_{x_k, y_k, α^s_k} T^s_k, subject to x_{i,k}, y_{j,k} ∈ {0, 1}, Σ_{i∈M} x_{i,k} = 1, Σ_{j∈N} y_{j,k} = 1, and α^s_k ∈ [0, 1]. By solving this optimization problem, we can derive the optimal AP, the optimal offloading edge server, and the optimal offloading ratio, and determine how to offload TUs in each cell to avoid interruption. Then, the minimum time required to complete the whole task is the sum of the minimized per-cell latencies, T_k = Σ_s T^s_k.
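The per-cell latency for a fixed AP/server pair can be sketched as the maximum of the local part and the offloading chain. This follows the reconstructed formulation above; the function name and all parameter values are illustrative assumptions:

```python
def total_cell_latency(alpha: float, n_s: int, unit_bits: float, omega: float,
                       f_local: float, f_edge: float, rate_bps: float,
                       t_ap_wait: float, t_comm: float, t_queue: float) -> float:
    """T_k^s = max(local processing time, offloading chain time) for one
    (AP, edge server) pair, under the reconstructed model."""
    cycles_per_tu = omega * unit_bits
    # Local part: the (1 - alpha) fraction processed on the vehicle.
    t_local = (1.0 - alpha) * n_s * cycles_per_tu / f_local
    # Offload part: AP wait + upload + AP-to-server hop + queue + execution.
    t_offload = (t_ap_wait
                 + alpha * n_s * unit_bits / rate_bps
                 + t_comm + t_queue
                 + alpha * n_s * cycles_per_tu / f_edge)
    return max(t_local, t_offload)

# Example: 10 TUs of 1 Mb, omega = 100, f_local = 1 GHz, f_edge = 5 GHz,
# 10 Mb/s uplink, no waiting or queuing delays.
t_all_local = total_cell_latency(0.0, 10, 1e6, 100.0, 1e9, 5e9, 1e7, 0, 0, 0)
t_all_edge = total_cell_latency(1.0, 10, 1e6, 100.0, 1e9, 5e9, 1e7, 0, 0, 0)
```

Sweeping `alpha` between 0 and 1 traces exactly the latency-versus-ratio curves discussed for Fig. 6.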

Solution
Now, we present the solution to the above optimization problem. It can be seen from Eq. (17) that the problem is a mixed integer programming (MIP) problem, which jointly solves AP selection, edge server selection, and the task offloading ratio. In the solving process, we first fix the AP selection strategy x_k = x*_k and the task offloading strategy y_k = y*_k; the original problem then becomes a convex optimization problem in α^s_k. We define g(α^s_k) as the offloading latency and u(α^s_k) as the local processing latency under the fixed strategies, so that T^s_k = max( g(α^s_k), u(α^s_k) ). It can be seen from Eq. (22) that g(α^s_k) and u(α^s_k) are linear functions of α^s_k: g(α^s_k) is monotonically increasing, and u(α^s_k) is monotonically decreasing. Thus, we can easily find the optimal task offloading ratio α^{s,bt}_k that minimizes task execution time, at the intersection of the two functions (clipped to [0, 1]), when x_k = x*_k and y_k = y*_k. Since the number of APs and edge servers in a cell is limited, we can traverse all combinations of APs and edge servers to obtain the optimal task offloading ratio in each case. Finally, by comparing the different combinations, the optimal solution is obtained.
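The solution procedure reduces, for each (AP, server) pair, to minimizing max(g(α), u(α)) with g increasing and u decreasing, then enumerating pairs. A minimal sketch, where we assume g(α) = a + bα (fixed offload overhead a, slope b) and u(α) = c(1 − α) (all-local time c); the function names and example numbers are ours:

```python
def best_ratio(a: float, b: float, c: float) -> float:
    """Minimize max(a + b*alpha, c*(1 - alpha)) over alpha in [0, 1].
    The optimum sits at the intersection of the increasing and
    decreasing lines, clipped to the unit interval."""
    if b + c == 0:
        return 0.0
    alpha = (c - a) / (b + c)
    return min(1.0, max(0.0, alpha))

def solve(ap_server_costs):
    """Enumerate all (AP, server) combinations; each entry is (a, b, c).
    Returns (best_index, best_alpha, best_latency)."""
    best = None
    for idx, (a, b, c) in enumerate(ap_server_costs):
        alpha = best_ratio(a, b, c)
        latency = max(a + b * alpha, c * (1.0 - alpha))
        if best is None or latency < best[2]:
            best = (idx, alpha, latency)
    return best

# Two candidate (AP, server) pairs with different overheads and slopes.
combos = [(0.2, 1.0, 1.0), (0.0, 2.0, 1.0)]
idx, alpha, latency = solve(combos)
```

Because the number of APs and servers per cell is small (both in [2, 5] in the simulations), this exhaustive traversal is cheap.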

Simulation and result analysis
In this section, we introduce simulation scenarios, including parameter settings. Then, we analyze the impact of several important parameters and discuss the performance of the proposed scheduling scheme through simulation results.

Simulation scenarios
The simulation experiment in this paper is conducted using an edge scheduler written in Java, which simulates a vehicle entering a series of closely adjacent cells (the coverage of the cells does not overlap). During the experiment, we assume vehicles enter 7 successive cells L^1_cell, L^2_cell, ..., L^7_cell, with coverage diameters of 100 m, 120 m, 150 m, 230 m, 200 m, 250 m, and 310 m, respectively. The number of arriving vehicles is in [1,40], and the speed range is [12,34] m/s. Then, we perform the task pre-allocation algorithm, comprehensively considering the vehicle speed, the range of each cell, the communication capability of the network access points, the computing power of the edge servers in each cell, etc., to predict the number of tasks that the vehicle can execute in each cell. After the vehicle enters each cell, the adaptive offloading strategy is invoked, considering the load status of each network access point and MEC server in the current cell, to find the optimal network access point, the offloading edge server, and the optimal offloading ratio. Simulation experiment parameters are detailed in Table 2. The value of each parameter (except the size of a task unit) is a random value in an interval. The computing power of edge servers and vehicles is measured in million instructions per second (MIPS). Figure 4 shows the relationship between the vehicle's speed and the number of pre-allocated TUs in the first three cells L^1_cell, L^2_cell, L^3_cell. The number of pre-allocated TUs differs across cells because the communication and computing capacity of each cell is different. It can also be seen from Fig. 4 that as the vehicle speed increases, the number of pre-allocated TUs in each cell decreases accordingly. This is because an increase in vehicle speed reduces the time the vehicle stays in the cell, which shortens the time available for task offloading.
Therefore, the number of pre-allocated TUs in each cell is not static; it changes dynamically with the vehicle speed.

Figure 5 shows the relationship between the number of arriving vehicles and the average latency of completing the pre-allocated TUs in the first three cells L_1^cell, L_2^cell, L_3^cell. The average data size of a vehicle task S_k is set to 150 MB, and the average vehicle speed is 20 m/s. During task offloading, a vehicle may pass through multiple cells. As Fig. 5 shows, as traffic becomes heavier, the latency of completing the pre-allocated TUs in each cell also increases: with more vehicles entering the cells, the pressure on the APs and edge servers grows, which delays task offloading and lengthens the task queues at the edge servers, increasing the total task completion latency.

Figure 6 shows the task execution latency in the three cells L_1^cell, L_2^cell, L_3^cell under different task offloading ratios, when the average vehicle speed is 20 m/s and the average vehicle task size S_k is 150 MB. Figure 7 shows the relationship between the average task execution delay and the task offloading ratio under different vehicle congestion conditions, with the same average speed and task size. The comparison shows that our adaptive method outperforms all fixed-ratio schemes (offloading ratios of 0%, 25%, 50%, 75%, and 100%) and can determine the optimal task offloading ratio that minimizes latency.
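The adaptive choice of offloading ratio can be illustrated by a minimal grid search over candidate ratios, assuming the local and offloaded parts of a task execute in parallel; the latency model and all parameter values are illustrative, not the paper's exact formulation:

```java
public class AdaptiveRatio {
    /**
     * Latency when a fraction r of the task runs at the edge and 1 - r locally.
     * The two parts run in parallel, so the task finishes when the slower
     * part finishes.
     *
     * @param taskMips  total task workload in million instructions
     * @param localMips vehicle computing power (MIPS)
     * @param uploadSec time to upload the entire task over the selected AP
     * @param edgeMips  edge server computing power (MIPS)
     * @param queueSec  queuing delay at the edge server
     */
    static double latency(double r, double taskMips, double localMips,
                          double uploadSec, double edgeMips, double queueSec) {
        double localTime = (1 - r) * taskMips / localMips;
        double edgeTime  = r * uploadSec + queueSec + r * taskMips / edgeMips;
        return Math.max(localTime, edgeTime);
    }

    /** Scan candidate ratios in steps of 0.05 and return the one with minimum latency. */
    static double bestRatio(double taskMips, double localMips,
                            double uploadSec, double edgeMips, double queueSec) {
        double best = 0, bestLat = Double.MAX_VALUE;
        for (double r = 0; r <= 1.0001; r += 0.05) {
            double lat = latency(r, taskMips, localMips, uploadSec, edgeMips, queueSec);
            if (lat < bestLat) { bestLat = lat; best = r; }
        }
        return best;
    }

    public static void main(String[] args) {
        // A fast, lightly loaded edge server pulls the optimal ratio toward 1,
        // but the upload cost keeps a share of the work local.
        System.out.println(bestRatio(8000, 500, 2.0, 4000, 0.5)); // prints a ratio near 0.8
    }
}
```

The optimum sits where the local and edge completion times balance, which is why no single fixed ratio (0%, 25%, 50%, 75%, or 100%) can be best under all load conditions.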

Result of performance evaluation
In order to validate the performance of the adaptive task offloading strategy proposed in this paper, we compare our proposed strategy (the M3 strategy) with two baseline strategies, M1 and M2:
(1) M1 strategy: after task pre-allocation in a cell has been completed, the vehicle connects to the AP with the smallest waiting latency and offloads the task to the edge server collocated with that AP, without task migration.
(2) M2 strategy: after task pre-allocation in a cell has been completed, the vehicle offloads to the edge server with the smallest queuing latency and connects to the AP near the selected edge server.
(3) M3 strategy: our proposed adaptive task offloading strategy.

Figure 8 shows the performance of the three offloading strategies under different traffic situations, and Fig. 9 compares them under different vehicle task data sizes. The results indicate that our proposed adaptive offloading strategy outperforms the others. Figure 10 presents the impact of the three strategies on the average task completion time under different vehicle speeds, when the number of arriving vehicles is 20 and the average vehicle task size S_k is 150 MB. We observe that our strategy incurs a lower task execution delay than the others. The figure also shows that, as the vehicle speed increases, the average task completion delay first falls and then rises. When the vehicle moves slowly, it stays in a cell for a long time, which places sustained load pressure on the edge server in that cell; when the vehicle moves fast, it travels across many cells, which causes frequent uploads of pre-assigned tasks and long waiting times.
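The difference between the three strategies can be sketched as follows: M1 and M2 each greedily optimize one dimension (AP waiting latency or server queuing latency), while M3 searches AP/server combinations jointly, here with a hypothetical migration cost when the chosen server is not collocated with the chosen AP. All latency values and names are illustrative:

```java
public class StrategyComparison {
    record Pair(int ap, int server) {}

    /** M1: connect to the AP with the smallest waiting latency and
     *  offload to the edge server collocated with it (no migration). */
    static Pair m1(double[] apWait, int[] collocated) {
        int ap = argmin(apWait);
        return new Pair(ap, collocated[ap]);
    }

    /** M2: offload to the edge server with the smallest queuing latency
     *  and connect through the AP collocated with that server. */
    static Pair m2(double[] serverQueue, int[] collocated) {
        int server = argmin(serverQueue);
        for (int i = 0; i < collocated.length; i++)
            if (collocated[i] == server) return new Pair(i, server);
        return new Pair(0, server); // fallback: no collocated AP found
    }

    /** M3 (joint): examine every AP/server pair, paying a migration cost
     *  when the server is not collocated with the AP, and pick the pair
     *  with the smallest total latency. */
    static Pair m3(double[] apWait, double[] serverQueue, int[] collocated,
                   double migrationCost) {
        Pair best = null;
        double bestLat = Double.MAX_VALUE;
        for (int i = 0; i < apWait.length; i++)
            for (int j = 0; j < serverQueue.length; j++) {
                double lat = apWait[i] + serverQueue[j]
                           + (collocated[i] == j ? 0 : migrationCost);
                if (lat < bestLat) { bestLat = lat; best = new Pair(i, j); }
            }
        return best;
    }

    static int argmin(double[] a) {
        int k = 0;
        for (int i = 1; i < a.length; i++) if (a[i] < a[k]) k = i;
        return k;
    }

    public static void main(String[] args) {
        double[] apWait = {1.0, 3.0};  // AP 0 is idle, AP 1 is busy
        double[] queue  = {5.0, 1.0};  // server 0 is loaded, server 1 is not
        int[] collocated = {0, 1};
        // M1 is stuck with server 0 (total 6.0), M2 with AP 1 (total 4.0);
        // M3 combines AP 0 and server 1 for 1.0 + 1.0 + 0.5 = 2.5.
        System.out.println(m3(apWait, queue, collocated, 0.5));
    }
}
```

In the example above the best AP and the best server are not collocated, so either greedy choice is forced into a congested resource, while the joint search can pay a small migration cost to combine the two lightly loaded ones.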

Conclusion
This paper studies the problem of task offloading in the vehicular edge computing environment. To solve the problem of service interruption and low QoS caused by the strong mobility of vehicles, a per-cell TU pre-allocation algorithm has been proposed. In existing work, the influence of network access point selection on task execution latency has often been ignored; since the access network and edge servers are often overloaded, commonly used task offloading methods cannot guarantee the user's QoS. In this paper, we study the joint optimization of network selection and task offloading and propose an adaptive task offloading strategy. The simulation results show that the proposed adaptive offloading strategy yields a clear improvement in task latency and system responsiveness. The scenario considered in this paper is a one-way straight road with no intersections and slow speed changes, whereas real road scenes are far more complicated: a vehicle may also accelerate, decelerate, or stop. In future work, we will consider more complex roads and vehicle movements and establish a more accurate system model, so that our pre-allocation algorithm and adaptive offloading strategy can adapt to more complex road environments.