
Journal of Cloud Computing: Advances, Systems and Applications

An efficient task offloading scheme in vehicular edge computing


Vehicular edge computing (VEC) is a promising paradigm for offloading resource-intensive tasks at the network edge. Owing to time-sensitive, computation-intensive vehicular applications and high-mobility scenarios, cost-efficient task offloading in the vehicular environment remains a challenging problem. In this paper, we study the partial task offloading problem in vehicular edge computing in an urban scenario, where a vehicle computes part of a task locally and offloads the remainder to a nearby vehicle and to a VEC server, subject to the maximum tolerable delay and the vehicle’s stay time. To make offloading cost-efficient, including the cost of the required communication and computing resources, we fully exploit the vehicles’ available resources. We estimate the transmission rates for vehicle-to-vehicle and vehicle-to-infrastructure communication based on practical assumptions. Moreover, we present a mobility-aware partial task offloading algorithm that determines the task allocation ratio among the three parts from the conditions of the communication environment. Simulation results validate the performance of the proposed scheme, which not only enhances the exploitation of vehicular computation resources but also minimizes the overall system cost in comparison to baseline schemes.


As an enabling technology for the Internet of Vehicles, vehicular edge computing (VEC) provides possible solutions to share computation capabilities between vehicles. The continuous increase in mobile applications has caused exponential growth in demand for high computational capability in wireless networks [1]. Vehicles are equipped with computing and storage resources to support intelligent transport systems and a wide variety of onboard infotainment services. It is predicted that by 2022, every self-driving car will have the computing capability to execute up to 10^6 Dhrystone million instructions per second (DMIPS) [2], ten times that of existing laptops. However, vehicles’ data demands and computation requirements are also increasing day by day, driven by innovative safety and non-safety applications, e.g., augmented reality, virtual reality, and immersive, real-time interactive applications. The VEC system came into existence to cope with these evolving communication and computation demands of vehicular systems, mobile devices, and pedestrians [3, 4]. Vehicular and infrastructural nodes, i.e., the roadside units (RSUs), can make their communication and computational resources available to the network.

Cloud infrastructure may induce considerable delay overhead between the cloud and vehicles when offloading computation tasks [5, 6], since cloud resources are usually deployed at the remote end [7]. VEC solves this problem by bringing computational capabilities into close proximity, enabling numerous vehicles to process their tasks at the network edge [8]. The proximity of edge computing resources to mobile vehicles minimizes network latency to a great extent. This enables VEC to offer a prompt interactive response in the computation offloading service by deploying computing nodes, or servers, to fulfill users’ demands for delay-sensitive tasks [9]. Computation offloading is a promising way of transferring computation-intensive activities to nearby servers [10, 11]. Moreover, VEC takes advantage of proximal vehicles to reduce the edge servers’ load; hence, vehicles can also perform computational tasks for VEC servers.

In addition, vehicles’ collaboration is enabled via infrastructure-based or infrastructure-independent communication using dedicated short-range communication (DSRC) or cellular vehicle-to-everything technology, i.e., vehicle-to-vehicle (V2V) communication. V2V communication facilitates both safety and non-safety applications [12]. In V2V communication, vehicles share data that describes the internal state and environment of the vehicle to widen the perceptual horizon of the communication partner. Any relevant information gathered from vehicular onboard sensors can be forwarded to nearby vehicles [13]. Apart from sharing information, nearby vehicles can help process high computational tasks that cannot be tackled alone by a vehicle with limited computational resources. Vehicles communicate directly with each other when they are within communication range. Among the various scenarios, mobility and computation offloading, which are mainly confined to the network edge, are based on the Internet of Vehicles [14]. However, vehicles’ computation capability is still too limited to fully handle the computational demands of existing and emerging low-latency applications.

Related work

VEC is becoming a prominent trend, and many researchers have worked on the challenges it poses [15, 16]. VEC emerged as a new paradigm for offloading computation tasks to the network edge, lightening the computation load of resource-limited vehicles and satisfying real-time responses to vehicles’ task requests [17]. In [18], the authors presented a contract-based mechanism for resource allocation that exploits mobile edge computing (MEC) servers’ resources and fulfills the offloading requisites of the tasks.

However, the computation and storage capacity of edge computing is still inadequate. Therefore, some hybrid schemes have been proposed that integrate the advantages of both edge computing and vehicular networks. For instance, Hou et al. [16] analyzed the use of both moving and parked vehicles as computation and communication platforms to improve service quality. Ye et al. [19] presented a scheme to offload tasks from mobile devices and cloudlets to fog-enabled buses at low energy and transmission costs, while Feng et al. [20] put forth a hybrid cloud computing infrastructure for vehicular networks, where tasks are offloaded to other vehicles in the vicinity or to the RSUs. Similarly, the authors in [21] opted for a Stackelberg game-theoretic scheme to develop a multilevel offloading framework and presented a hierarchically organized cloud-based VEC offloading scheme, in which a backup computing neighborhood supports the computing resources of MEC servers. Different from [16, 20], and [21], Lai et al. [22] proposed a three-tier vehicular network that includes the cloud layer, the fog layer, and the network layer; the authors developed cooperation and scheduling schemes to manage the vehicle nodes. Ren et al. [23] developed a partial compression offloading framework, where a small part of the data is computed locally on the vehicle while the remaining part is computed on the MEC server, which allows local and MEC computation resources to be exploited efficiently. With the integration of V2V and vehicle-to-infrastructure (V2I) communication, the authors in [24] came up with a framework to offload the load of vehicles with a low signal-to-interference-plus-noise ratio to be served by other vehicles with better-quality links. The authors in [25] and [26] presented predictive combination-mode and load-aware MEC offloading schemes, respectively, in which tasks are offloaded to the MEC server via V2V relay transmission or V2I uploading.

Bozorgchenani et al. [27] analyzed a partial offloading method in which the amount of the task to be offloaded is estimated to reduce the outage probability, considering the vehicles’ mobility in an urban environment; the entire task offloading process must take less than the vehicles’ stay time. In contrast, we offload to a nearby vehicle only the amount of the task that can be transmitted within the stay time while meeting the task’s maximum tolerable delay. This helps utilize the available resources of the vehicles while dividing the task accordingly and sharing the burden of the VEC server. In [28], the authors focused on federated offloading in vehicular networks to reduce the total latency. The computation task is divided into three parts: computed locally, sent to nearby vehicles, and sent to the VEC server. The authors assign the computing ratios to keep the whole task computed within the given latency deadline. However, this scheme does not fully use vehicular resources, as it prefers assigning most of the task to the VEC server, whereas we propose to exploit vehicle resources and ease the load of the VEC server.

In this paper, we focus on task offloading leveraging V2V (among vehicles) and V2I (between vehicles and VEC servers) communication. Since both the vehicles and the VEC servers are equipped with computation resources, considering such a hybrid network in a dense urban scenario can further boost the network’s communication and computing capacity. As we consider partial task offloading, part of the computation is handled locally, while the remaining task is offloaded to nearby vehicles and the VEC server. We propose a mobility-aware partial task offloading approach that enables a cost-efficient system. In our proposed scheme, we consider two types of vehicles, i.e., resource-hungry vehicles (RHVs) and resource-rich vehicles (RRVs). As their names imply, an RHV always tries to offload its task because it has limited computing resources, while an RRV has abundant resources and helps RHVs with computation. It is pertinent to mention that the task offloading decisions are determined by each RHV, and multiple RRVs might be available to process each RHV’s task. Thus, our proposed scheme helps relieve the VEC server’s burden by exploiting the underutilized resources of RRVs. Given the dynamic nature of vehicular networks, the wireless channel conditions and network topology change rapidly due to the vehicles’ incessant mobility [29]. Furthermore, the computation workloads of available RRVs vary over time. Taking these factors into account, we present a partial task offloading algorithm in which, apart from local computation, RHVs prioritize RRV selection when making a decision, and RRVs are assigned the maximum portion of a task, as this incurs lower communication and computation costs. This scheme fully utilizes the computation resources of RRVs and reduces the overall burden of the VEC server; hence, the overall system cost is also minimized.

Motivation and contributions

Much work has been done on task offloading; however, the following schemes are limited in several aspects:

  • In [19], the scheme is centralized: mobile users first send their tasks to RSUs, and each RSU then decides, according to its load, either to compute a task itself or to hand it to resourceful vehicles for computation.

  • The works [18, 19, 21, 25, 26] considered binary offloading, which does not guarantee full utilization of vehicles’ computational resources.

  • The studies [25] and [26] use vehicular communication resources but eventually place the computation load on the MEC server.

  • In [27], the allocated portion of the task for the nearby vehicle depends upon the stay time of the vehicle to reduce the outage probability.

  • In [28], the scheme does not fully utilize vehicular resources as it prefers the MEC server.

Our proposed work aims to fill the above-mentioned gaps. More specifically, the main contributions of this paper are listed below:

  1. We model a task offloading scheme to minimize the overall offloading cost. This model is used to create a realistic vehicular environment and study the task offloading problem in a large-scale network, where a task is computed partially at the source vehicle and the maximum part of the remainder is offloaded and computed first at proximate vehicles and then at the relevant VEC server. This not only exploits vehicles’ abundant resources and reduces the overburdened VEC server’s load, but also slashes the overall system cost.

  2. We propose a mobility-aware partial task offloading algorithm for the VEC scenario. It allows each vehicle to select nearby vehicles based on the best available resources at minimum cost. Moreover, we make practical assumptions and estimate the transmission rates for V2V and V2I communication. Based on these, the proportions of a task to be computed locally, on a nearby vehicle, and at the VEC server are calculated, conditional on the maximum tolerable delay and the vehicle’s stay time.

  3. We evaluate the influence of different parameters and vehicular environments on our mobility-aware partial task offloading scheme by comparing it with different strategies, and we use extensive simulations to validate the effectiveness of the proposed solution.

The rest of this paper is organized as follows. The “System model” section presents the system model, and the problem’s formal definition is discussed in the “Problem formulation” section. The proposed algorithm is presented in the “Mobility-Aware partial (MAP) task offloading algorithm” section. The “Results and discussions” section presents the implementation and evaluation of our MAP algorithm. Finally, the “Conclusion” section concludes this paper.

System model

In this section, we first describe the network topology followed by the communication model’s description. Then we present the computation model. All the notations used in the system model are listed in Table 1.

Table 1 Frequently Used Notations

Network topology

Figure 1 shows our proposed mobility-aware partial task offloading in VEC. A unidirectional road is considered, where RSUs are installed along the road, as in a typical vehicular network. A VEC server is installed with each RSU. We denote the vertical distance from the RSU to the road by e. Each RSU has a communication range with a radius of 200 meters. The set of vehicles having tasks to offload is defined as N = {1, …, n}. Considering the heterogeneity of vehicles, each vehicle has a distinct set of computational resources. Vehicles can offload their tasks to the RSUs; additionally, if a complex computation task is handed over to an RSU, it is computed on the VEC server. A central controller installed in the network monitors and manages the RSUs [18]. Many vehicles traverse the coverage of each RSU, and we classify them into two categories, i.e., RHVs and RRVs. As the name implies, an RHV is a vehicle that has a computation task to offload. Since the coverage radius r of the RSU and the vertical distance e from the RSU to the road are known, the distance a vehicle travels within the RSU coverage is:

$$ s_{n}=2\sqrt{r^{2}-e^{2}}. $$
Fig. 1

Task offloading scenario in Vehicular Edge Computing

Accordingly, the stay time of the vehicle within RSU coverage is derived as:

$$ t^{V2I}_{n,stay}=\frac{s_{n}}{v_{n}}, $$

where the parameter vn denotes the speed of vehicle Vn.
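As a quick sanity check, the two expressions above can be evaluated numerically. The following Python sketch is illustrative only (variable names are ours, and the sample values are not from the paper):

```python
import math

def coverage_distance(r, e):
    """Chord length s_n = 2*sqrt(r^2 - e^2): the distance a vehicle travels
    inside an RSU's circular coverage of radius r, when the road runs at
    perpendicular distance e from the RSU."""
    return 2.0 * math.sqrt(r ** 2 - e ** 2)

def v2i_stay_time(r, e, v_n):
    """Stay time t^{V2I}_{n,stay} = s_n / v_n at constant speed v_n."""
    return coverage_distance(r, e) / v_n
```

For instance, with r = 200 m, e = 0 m, and v_n = 20 m/s, a vehicle crosses 400 m of coverage in 20 s.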

Communication model

The communication model comprises V2V communication and V2I communication, which are discussed as follows.

V2V communication

In V2V communication, vehicles interact with each other according to the DSRC standard [30]. The maximum communication range of V2V is expressed as Climit. We assume independent and identically distributed channels among vehicles. The path loss of V2V communication is determined as [24]:

$$ L^{V2V}_{n}= 10^{- \frac{63.3 + 17.7\log_{10}(d_{n,i})}{10}}, $$

where dn,i is the distance between Vn and Vi; it must satisfy the condition 0 ≤ dn,i ≤ Climit.

In our scenario, the vehicles’ speeds may differ; therefore, vehicles have a relative speed between them, which we denote for Vn and Vi as vn,i. The vehicle’s bandwidth is specified as BV2V, and orthogonal frequencies are usually chosen for V2V communication. Accordingly, the transmission rate between vehicles Vn and Vi is computed as:

$$ R^{n,i}_{V2V}= B_{V2V}\log_{2}\left(1+\frac{P_{t}L^{V2V}_{n}|h|^{2}}{N_{0}}\right). $$
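Combining the path-loss model above with the Shannon-capacity rate, the V2V uplink rate can be sketched in a few lines of Python. This is a minimal illustration with our own variable names; the channel gain |h|^2 and noise power N_0 are passed in directly:

```python
import math

def v2v_path_loss(d):
    """Linear-scale V2V path loss: 10^{-(63.3 + 17.7*log10(d))/10}."""
    return 10 ** (-(63.3 + 17.7 * math.log10(d)) / 10.0)

def v2v_rate(bandwidth, p_t, d, h2, n0):
    """Shannon-capacity V2V uplink rate (bits/s):
    B * log2(1 + P_t * L^{V2V} * |h|^2 / N_0)."""
    snr = p_t * v2v_path_loss(d) * h2 / n0
    return bandwidth * math.log2(1.0 + snr)
```

As expected from the model, the path loss (and hence the rate) decreases monotonically with the inter-vehicle distance d.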

Moreover, we need to evaluate how long vehicle Vn stays within the coverage of vehicle Vi, to avoid offloading failure once vehicle Vi is out of range. The remaining distance before vehicle Vn leaves the coverage of vehicle Vi at time t can be expressed as follows [27]:

$$\begin{array}{*{20}l} d_{n,i}(t)= &\sqrt{r^{2}_{i}- (x_{i}(t)-x_{n}(t))^{2}} \pm (y_{i}(t)-y_{n}(t)), \end{array} $$

where {xn(t), yn(t)} and {xi(t), yi(t)} are the positions of vehicle Vn and vehicle Vi, respectively, at time t, while ri is the radius of vehicle Vi’s coverage area. Accordingly, the time that vehicle Vn remains in the coverage area of vehicle Vi can be defined as:

$$ t^{V2V}_{n,stay}= \frac{d_{n,i}(t)}{|\overrightarrow \upsilon_{n}-\overrightarrow \upsilon_{i}|}, $$

where \(|\overrightarrow \upsilon _{n}-\overrightarrow \upsilon _{i}|\) is the magnitude of the relative velocity of vehicles Vn and Vi, in view of their relative direction. Thus, the uplink rate \(R^{n,i}_{V2V}\) changes with time and can be written as \(R^{n,i}_{V2V}(t)\). Therefore, the average uplink rate between vehicles Vn and Vi is given as:

$$ \overline{ R^{n,i}_{V2V}}= \frac{\int^{t^{V2V}_{n,stay}}_{0} R^{n,i}_{V2V}(t){dt}}{t^{V2V}_{n,stay}}. $$
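Since \(R^{n,i}_{V2V}(t)\) rarely has a closed-form integral, the time average can be approximated numerically. The sketch below is our own helper, not the paper’s code, and uses the trapezoidal rule:

```python
def average_rate(rate_fn, t_stay, steps=1000):
    """Approximate (1/t_stay) * integral_0^{t_stay} R(t) dt with the
    trapezoidal rule; rate_fn is any callable t -> instantaneous rate."""
    dt = t_stay / steps
    area = 0.0
    for k in range(steps):
        # Trapezoid over [k*dt, (k+1)*dt]
        area += 0.5 * (rate_fn(k * dt) + rate_fn((k + 1) * dt)) * dt
    return area / t_stay
```

The same helper applies verbatim to the V2I average rate defined later, with the V2I rate function and the V2I stay time substituted.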

V2I communication

Unlike V2V communication, which uses DSRC technology, we leverage LTE-A for V2I communication between vehicles and RSUs [30]. The parameter dn,rsu is the distance between vehicle Vn and the center of the RSU’s coverage. The path loss between vehicle Vn and its proximal RSU can be represented as \(d^{-\sigma }_{n,rsu}\), and the white Gaussian noise power as N0. The factor σ is the path-loss exponent [31]. Furthermore, the uplink channel is modeled as a Rayleigh fading channel denoted h [28]. Hence, the uplink data rate is defined as:

$$ R_{V2I}= B_{V2I}\log_{2}\left(1+\frac{P_{t}d^{-\sigma}_{n,rsu}|h|^{2}}{N_{0}}\right), $$

where the parameter BV2I represents the uplink channel bandwidth, and Pt denotes the transmission power of the vehicle’s onboard device.

In our scenario, vehicles travel at a constant speed, and the distance dn,rsu varies with time as the vehicle moves, given by

$$ d_{n,rsu}(t)= \sqrt{e^{2}+\left(\frac{s_{n}}{2} - v^{abs}_{n}t\right)^{2}}, $$

where \(v^{abs}_{n}\) is the speed of vehicle Vn. Accordingly, the uplink rate RV2I varies with time as well and can be defined as RV2I(t). The V2I average uplink rate is defined as:

$$ \overline{ R_{V2I}}= \frac{\int^{t^{V2I}_{n,stay}}_{0} R_{V2I}(t){dt}}{t^{V2I}_{n,stay}}. $$

\(\overline {R_{V2I}}\) is the V2I average uplink rate, i.e., the uplink rate at which Vn offloads a task to the VEC server.
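The time-varying V2I distance and the corresponding rate can be sketched as follows. These are illustrative helpers under the power-law path-loss model stated above; parameter names are ours:

```python
import math

def v2i_distance(e, s_n, v_abs, t):
    """Vehicle-to-RSU distance at time t: sqrt(e^2 + (s_n/2 - v*t)^2),
    where s_n is the chord length through the RSU coverage."""
    return math.sqrt(e ** 2 + (s_n / 2.0 - v_abs * t) ** 2)

def v2i_rate(bandwidth, p_t, d, sigma, h2, n0):
    """V2I uplink rate with power-law path loss d^{-sigma}:
    B * log2(1 + P_t * d^{-sigma} * |h|^2 / N_0)."""
    return bandwidth * math.log2(1.0 + p_t * d ** (-sigma) * h2 / n0)
```

Note the geometry: the distance is minimal (equal to e) when the vehicle passes the point closest to the RSU, at t = s_n / (2 v), and is symmetric about that instant.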

Computation model

We assume that vehicle Vn has a computing task described as \(R_{n} =\left \{B_{n}, D_{n}, t^{max}_{n}\right \}\). Here Bn indicates the total number of CPU cycles required to carry out the task, Dn is the task data size, including the input parameters and program code, and \(t^{max}_{n}\) is the maximum tolerable delay of task Rn, meaning that the time to complete the task must not exceed \(t^{max}_{n}\). The task is divided into three parts: one computed locally at vehicle Vn, one offloaded to a nearby vehicle Vi via V2V communication, and the final part offloaded to the nearest VEC server for computation. The ratios of these parts to the total task data Dn are denoted αn, βn, and γn, respectively. Different ratios influence the total latency and the cost to finish the task. Since computation units are installed in vehicles, tasks can also be computed on a nearby RRV. The computation ability may vary from vehicle to vehicle; therefore, we denote the computation capacity of Vi as fVi. To improve the utilization of computing resources, we present the V2V offloading method: the task of vehicle Vn may be offloaded to a nearby qualified vehicle. Moreover, each vehicle’s priority is to offload as much of the task as possible to its nearby qualified vehicle, according to the available computing capacity, with the final remaining part going to the VEC server.

To elaborate on the computation model in detail, we introduce local computing, followed by nearby vehicle computing, and finally VEC computing.

Local computing

When the source vehicle Vn chooses to perform task Rn locally, \(T^{Local}_{n}\) is defined as the local execution delay of vehicle Vn, which includes the local CPU processing delay. fVn denotes the computation capacity (i.e., CPU cycles per second) of Vn. Considering the heterogeneity of vehicles, different vehicles may have different computation capacities. The local execution delay of task Rn is given as:

$$ t^{l}_{n}= \frac{B_{n}}{fV_{n}}, $$

The portion αn of task Dn is computed locally, giving:

$$ T^{Local}_{n}= \alpha_{n} * t^{l}_{n}. $$

Φlocal is the cost per unit time of local computation. Taking into account the above-mentioned time consumption, the total cost of local computing can be specified as:

$$ C^{Local}_{n}= \Phi_{local} * T^{Local}_{n}. $$
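Putting the last three equations together, the local latency and cost reduce to a few lines. The sketch below uses our own function signature; Φ_local is a cost per unit time, as assumed above:

```python
def local_cost(b_n, f_vn, alpha_n, phi_local):
    """Local part of the task:
    T^Local = alpha_n * (B_n / fV_n); C^Local = Phi_local * T^Local.
    Returns (latency, cost)."""
    t_local = alpha_n * (b_n / f_vn)
    return t_local, phi_local * t_local
```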

Nearby vehicle computing

The selected RRV Vi processes the task and generates the output after fetching the input data from vehicle Vn. The computation intensity of the task mainly depends on the nature of the application. The V2V offloading latency consists of task transmission and execution time. The transmission time from Vn to vehicle Vi is represented as \(t^{V_{i}}_{n,up}\), defined as:

$$ t^{V_{i}}_{n,up}= \frac{D_{n}}{\overline{ R^{n,i}_{V2V}}}, $$

The computation capacity of the nearby vehicle is denoted fVi, so the execution time on vehicle Vi is:

$$ t^{V_{i}}_{n,ex}= \frac{B_{n}}{fV_{i}}. $$

We represent the total offloading latency (i.e., execution and transmission time) from Vn to Vi as \(T^{V2V}_{n}\), which can be expressed as:

$$ T^{V2V}_{n}= \beta_{n}*t^{V_{i}}_{n,up} + \beta_{n}*t^{V_{i}}_{n,ex}, $$

where βn is the portion of task Dn. We check all vehicles’ computation resources and then select the qualified vehicle; the details follow in the “Results and discussions” section. Each task requires its own memory and processing power and has a specified cost per unit of usage time [32]. Therefore, considering the aforementioned time consumption, the total cost of V2V computing can be defined as:

$$\begin{array}{*{20}l} C^{V2V}_{n}= & \left\{\psi_{V2V}* (\beta_{n}*t^{V_{i}}_{n,up})\right\}+ \\ & \left\{\Phi_{V2V}* (\beta_{n}*t^{V_{i}}_{n,ex})\right\}, \end{array} $$

where ψV2V is the transmission cost and ΦV2V is the V2V computation cost.
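The V2V latency and cost above can be sketched likewise (our own function signature; all inputs are scalars, with the average V2V rate precomputed):

```python
def v2v_cost(d_n, b_n, rate_v2v, f_vi, beta_n, psi_v2v, phi_v2v):
    """V2V part of the task:
    latency  = beta_n * (D_n / rate) + beta_n * (B_n / fV_i);
    cost     = psi_V2V * transmit time + phi_V2V * execution time.
    Returns (latency, cost)."""
    t_up = beta_n * d_n / rate_v2v
    t_ex = beta_n * b_n / f_vi
    return t_up + t_ex, psi_v2v * t_up + phi_v2v * t_ex
```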

VEC computing

The VEC offloading latency comprises three parts: the latency to transmit the data to the nearest VEC server, the ready time of the task on the VEC server, and the execution time on the VEC server. We neglect the delay of transmitting the result back, following [31, 33]. The latency for transmitting the data to the VEC server is given by:

$$ t^{VEC}_{n,up} = \frac{D_{n}}{\overline{ R_{V2I}}}. $$

Vehicle Vn offloads the remaining part of the task to the nearest VEC server via the wireless link. During transmission, vehicle Vn must be within the coverage area of its connected RSU. Specifically, the transmission time \( t^{VEC}_{n, up} \) from vehicle Vn to the VEC server must be shorter than the time that vehicle Vn stays in the coverage of its connected RSU:

$$ t^{VEC}_{n,up} \le t^{V2I}_{n,stay}. $$

The computation capacity of the VEC server is denoted fm (i.e., CPU cycles per second). Therefore, the execution time \(t^{VEC}_{n,ex}\) on the VEC server can be calculated as follows:

$$ t^{VEC}_{n,ex} = \frac{B_{n}}{f_{m}}. $$

Further, we define the ready time of a task according to [34].

Definition 1 (Ready Time). The ready time of a task is the time at which all of the task’s predecessors have finished their execution. The ready time of task Rn of vehicle Vn in VEC computing is denoted \(RT^{VEC}_{n,R_{n}}\):

$$ RT^{VEC}_{n,R_{n}}= \max_{k \in pred(R_{n})} t^{VEC}_{n,ex,k}, $$

where pred(Rn) is the set of predecessors of task Rn. Therefore, \(\max _{k \in pred(R_{n})} t^{VEC}_{n,ex,k}\) is the time when the predecessors of task Rn offloaded to the VEC server have completed their execution. Note that the VEC server can begin executing task Rn only after the task has been fully offloaded and all predecessors of task Rn have completed their execution on the VEC server.
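Definition 1 boils down to a maximum over the predecessors’ finish times. A one-line sketch (we assume, consistently with the definition, that a task with no predecessors is ready at time 0):

```python
def ready_time(pred_finish_times):
    """Ready time of a task on the VEC server: the latest finish time
    among its predecessors; 0.0 when the task has no predecessors."""
    return max(pred_finish_times, default=0.0)
```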

The total latency for VEC offloading \(T^{VEC}_{n}\) is the sum of the uplink transmission time from vehicle Vn to VEC server \(t^{VEC}_{n,up}\), ready time \(RT^{VEC}_{n,R_{n}}\) and execution time \(t^{VEC}_{n,ex}\). Hence, the total latency for VEC offloading \(T^{VEC}_{n}\) can be defined as:

$$ T^{VEC}_{n}= \gamma_{n} *t^{VEC}_{n,up} + RT^{VEC}_{n,R_{n}}+ \gamma_{n} *t^{VEC}_{n,ex}, $$

As mentioned above, many conditions and constraints affect the latency \(T^{VEC}_{n} \), for instance the vehicle speed and the ratio γn. When the VEC server finishes computing, the output results are sent back to vehicle Vn. We neglect the transmission time from the VEC server to Vn, since the output data is very small compared to the input [33].

The cost is evaluated by processor utilization: the longer the utilization time, the higher the cost [35]. Considering the above time consumption, the total cost of VEC computing can be computed as:

$$\begin{array}{*{20}l} C^{VEC}_{n}= & \left\{\psi_{V2I}*\left(\gamma_{n} *t^{VEC}_{n,up}\right)\right\}+ \\ & \left\{\Phi_{VEC} *\left(RT^{VEC}_{n,R_{n}}+ (\gamma_{n} * t^{VEC}_{n,ex})\right)\right\}, \end{array} $$

where γn is the portion of task Dn, ψV2I is the transmission cost, and ΦVEC is the cost of both ready time and execution time on the VEC server. Thus, the total cost to complete a task is denoted Cn:

$$ C_{n}= \left\{C^{Local}_{n}+ C^{V2V}_{n} + C^{VEC}_{n} \right\}. $$

Moreover, the total cost of the whole system can be derived as:

$$ C_{Total}= \sum^{N}_{n=1} \left\{C^{Local}_{n}+ C^{V2V}_{n} + C^{VEC}_{n} \right\}. $$
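The system-wide cost is then a plain summation over vehicles. A trivial sketch (each tuple holds one vehicle’s local, V2V, and VEC costs, computed as above):

```python
def total_system_cost(per_vehicle_costs):
    """C_Total = sum over all vehicles of (C^Local + C^V2V + C^VEC).
    per_vehicle_costs is an iterable of (c_local, c_v2v, c_vec) tuples."""
    return sum(c_local + c_v2v + c_vec
               for c_local, c_v2v, c_vec in per_vehicle_costs)
```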

Problem formulation

In this section, we formulate partial task offloading as an optimization problem. The aim is to minimize the total offloading cost while satisfying constraints such as the maximum tolerable delay and computational capacity limits. The optimization problem is written as follows:

$$\begin{array}{*{20}l} \mathbf{P1:} & \min_{(\alpha_{n},\beta_{n},\gamma_{n})} C_{Total} \\ {s.t.} \hspace{0.2cm} & \alpha_{n} + \beta_{n} + \gamma_{n} =1 \end{array} $$
$$\begin{array}{*{20}l} & \max \{T^{Local}_{n}, T^{V2V}_{n},T^{VEC}_{n}\} \le t^{max}_{n}, \end{array} $$
$$\begin{array}{*{20}l} & 0 \leq f_{V_{i}} \leq F^{V_{i}}_{n}, 0 \leq f_{m} \leq F^{VEC}_{n}, \forall n \in N \end{array} $$
$$\begin{array}{*{20}l} & t^{VEC}_{n,up} \le t^{V2I}_{n,stay}, \end{array} $$
$$\begin{array}{*{20}l} & t^{V_{i}}_{n,up} \le t^{V2V}_{n,stay}. \end{array} $$

Our optimization goal is to minimize the total cost. Constraint (26a) is the relationship among αn, βn, and γn. (26b) indicates that the local, nearby-vehicle, and VEC-server offloading times must not exceed the maximum tolerable delay. (26c) shows that the computing resources assigned to vehicle Vn cannot surpass the total resources \(F^{V_{i}}_{n}\) of the nearby vehicle and \(F^{VEC}_{n}\) of the VEC server, respectively. (26d) specifies that the V2I part of the task must be transmitted completely to the VEC server before vehicle Vn leaves the RSU’s communication range. (26e) indicates that the V2V part must be transmitted completely before vehicle Vn leaves the communication range of Vi.
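The constraints of P1 can be checked for a candidate split (αn, βn, γn) with a simple predicate. The sketch below takes the already-computed latencies as inputs; names and the tolerance are ours:

```python
def feasible(alpha, beta, gamma, t_local, t_v2v, t_vec,
             t_max, t_vec_up, t_v2i_stay, t_v2v_up, t_v2v_stay, eps=1e-9):
    """Check the constraints of P1: the ratios sum to 1 (26a), every part
    meets the deadline (26b), and both uploads finish within the
    respective stay times (26d)-(26e)."""
    return (abs(alpha + beta + gamma - 1.0) < eps
            and max(t_local, t_v2v, t_vec) <= t_max
            and t_vec_up <= t_v2i_stay
            and t_v2v_up <= t_v2v_stay)
```

Resource bounds (26c) are enforced separately when the nearby vehicle and VEC capacities are assigned.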

Mobility-Aware partial (MAP) task offloading algorithm

In the V2V network, global information about vehicles may be unavailable or too costly to obtain. Besides, it is hard for a vehicle to obtain multi-hop information because of the maximum communication range constraint, and doing so also increases complexity. Furthermore, the connection information among vehicles may change over time [36]. Vn identifies RHVs and RRVs through beacons, the packets periodically broadcast by vehicles to announce their type, speed, computation capacity, and state [37–39]. Obtaining beacon messages from multi-hop vehicles in a dynamic environment is time-consuming: since a vehicle can reach multi-hop vehicles only in a relay fashion, the information received about them may not remain reliable over time. Moreover, frequent beacon updates might overload the wireless channel, with a potential impact on communication reliability; hence, with multi-hop, an appropriate quality of service cannot be guaranteed [37]. Therefore, in our algorithm, each vehicle keeps the computation capacities of the vehicles in its one-hop communication range. The one-hop information is denoted \(\Gamma _{v_{n}}=(f_{V_{1}},f_{V_{2}}, f_{V_{3}},...,f_{V_{j}})\), representing the computation capacities of all vehicles in the communication range of vehicle Vn. We further use \(\Gamma _{v_{n}}\) to denote the set of vehicles in the communication range of Vn. As the one-hop information is kept locally, we follow a greedy approach to choose the best vehicle among those in the communication range.

We calculate all the available resources for offloading a task from the RHV Vn to the vehicles Vi (Vi ∈ Γvn). Then, we select the vehicle with the least cost as the qualified vehicle among all candidate vehicles in the vicinity, where \(C^{V_{i}}_{n}\) is the cost of a candidate vehicle. As shown in Fig. 1, the qualified vehicle is indicated by the yellow line, while the yellow dotted line indicates the vehicle(s) present within Vn’s communication range. The task is transmitted from Vn to the qualified nearby vehicle. Thus, the qualified vehicle obtained by our algorithm is:

$$ V_{n,i}= \arg\min_{V_{i} \in \Gamma_{v_{n}}} C^{V_{i}}_{n}. $$
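This greedy choice amounts to an arg-min over one-hop candidate costs. A minimal sketch, where a dictionary stands in for Γ_{v_n} with precomputed costs \(C^{V_{i}}_{n}\):

```python
def qualified_vehicle(candidate_costs):
    """Greedy choice among one-hop neighbours: pick the RRV with the least
    V2V offloading cost. candidate_costs maps vehicle id -> C_n^{V_i}.
    Returns None when no candidate has enough resources."""
    if not candidate_costs:
        return None
    return min(candidate_costs, key=candidate_costs.get)
```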

Vehicle Vn offloads the βn part to the qualified vehicle Vn,i. During the transmission process, vehicle Vn must stay in the coverage area of vehicle Vi. Specifically, the transmission time \(t^{V_{i}}_{n,up}\) from vehicle Vn to its nearby vehicle must be less than the stay time of vehicle Vn in that vehicle’s communication range. We must examine whether the portion βn of the task can be delivered completely before the vehicle leaves the communication range; therefore, the constraint in Eq. (26e) must be satisfied.

After the task has been computed by vehicle Vn,i, the result is forwarded to both vehicle Vn and the nearest VEC server. Thus, if the result cannot be received by Vn due to communication range limitations and mobility, the VEC server, with its wider coverage area, transfers the result back to the requesting vehicle. However, if no vehicle with enough resources to handle vehicle Vn’s task is found, Vn also offloads the V2V part, i.e., βn, to a VEC server. The procedure for choosing the qualified nearby vehicle is given in Algorithm 1.

Ratio estimation for partial task offloading

The time to transmit the portion of vehicle Vn’s task must satisfy the stay-time constraint of the selected vehicle. We offload a portion of the task by estimating the offloading time and the vehicles’ velocities, while meeting the maximum tolerable delay. Therefore, exploiting Eqs. (6) and (16), we can formulate:

$$\begin{array}{*{20}l} & \beta 1_{n} \cdot \frac{D_{n}}{\overline{R^{n,i}_{V2V}}} + \beta 1_{n} \cdot \frac{B_{n}}{f_{V_{i}}} \le t^{max}_{n}, \\ & \beta 1_{n} \le \frac{t^{max}_{n}}{\frac{D_{n}}{\overline{R^{n,i}_{V2V}}} + \frac{B_{n}}{f_{V_{i}}}}, \end{array} $$

which allows us to find the value of the β1n parameter according to the maximum tolerable delay \(t^{max}_{n}\). In addition, the ratio β2n according to the stay time can be calculated as:

$$\begin{array}{*{20}l} & \beta 2_{n} \cdot \frac{D_{n}}{\overline{R^{n,i}_{V2V}}} \le \frac{d_{n,i}(t)}{|\overrightarrow{\upsilon}_{n}-\overrightarrow{\upsilon}_{i}|}, \\ & \beta 2_{n} \le \frac{d_{n,i}(t)}{|\overrightarrow{\upsilon}_{n}-\overrightarrow{\upsilon}_{i}| \cdot \frac{D_{n}}{\overline{R^{n,i}_{V2V}}}}, \end{array} $$

The above equations set an upper bound on the portion of the task to be offloaded via V2V. Moreover, Algorithm 2 estimates the values of αn, βn, and γn for all vehicles.
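The two bounds above can be sketched as a small helper, assuming consistent units for data size, rate, cycles, and CPU frequency; the function name and argument names are hypothetical:

```python
def v2v_ratio_bound(d_n, b_n, rate_v2v, f_vi, t_max, dist_ni, rel_speed):
    """Upper bound on the V2V portion beta_n of vehicle V_n's task.

    beta1: delay bound -- upload time D_n/R plus execution time B_n/f_{V_i}
           must fit within the maximum tolerable delay t_n^max.
    beta2: mobility bound -- the upload must finish within the stay time
           d_{n,i}(t) / |v_n - v_i|.
    The usable ratio is the smaller of the two, capped at 1."""
    beta1 = t_max / (d_n / rate_v2v + b_n / f_vi)
    stay_time = dist_ni / rel_speed
    beta2 = stay_time / (d_n / rate_v2v)
    return min(beta1, beta2, 1.0)
```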

The entire process of the mobility-aware partial task offloading algorithm is described in Algorithm 3.

In Algorithm 3, each RHV offloads its task locally, to the qualified vehicle, and to the VEC server according to the portions αn, βn, and γn, respectively. This process may continue until the maximum tolerable delay is reached. Lines 5–8 compute the αn portion locally along with its cost. Lines 9–20 compute the βn portion offloaded to the qualified vehicle Vn,i, i.e., the one with the minimum cost. If the vehicle remains in the coverage area until the job is done, it returns the output directly to the RHV; otherwise, it hands the output over to the nearest VEC server. Lines 21–30 compute the γn portion on the nearest VEC server. If the computation finishes within the stay time, the VEC server immediately transmits the output to Vn; otherwise, it forwards the output to the VEC server where the vehicle is currently located. Line 33 represents the total offloading cost of the whole system.
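The three-way cost aggregation that Algorithm 3 performs can be summarized as follows; the per-part cost functions are placeholders for the paper's communication-plus-computation cost terms, not its exact model:

```python
def total_offloading_cost(splits, cost_local, cost_v2v, cost_vec):
    """Total system cost when each RHV n computes alpha_n locally, offloads
    beta_n to its qualified vehicle, and gamma_n to the VEC server.

    splits: dict mapping a vehicle id to (alpha, beta, gamma), summing to 1.
    cost_*: callables giving the cost of each part for vehicle n."""
    total = 0.0
    for n, (alpha, beta, gamma) in splits.items():
        assert abs(alpha + beta + gamma - 1.0) < 1e-9, "ratios must sum to 1"
        total += alpha * cost_local(n) + beta * cost_v2v(n) + gamma * cost_vec(n)
    return total
```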

Results and discussions

In this section, we analyze our proposed mobility-aware partial task offloading scheme. We consider five RSUs, each having a VEC server, located alongside a unidirectional road in an urban mobility road traffic scenario. We also assume that the vehicles are randomly distributed on the road. In our simulation, we consider the computing speed of each vehicle in the range [106,2×108] cycles/s. We set the computational speed of the VEC server as \(F^{VEC}_{n} = 8 \times 10^{8}\)cycles/s [40]. The speed of vehicle Vn is \(v^{abs}_{n}= 60km/hour\) [41]. The relative speed among vehicles is set in the range of [10,20]m/s. The vertical distance from the RSU to the road is set as e=100m. The communication radius of the RSU coverage area is taken as r=200m. In addition, the radius of V2V communication Climit is set to 150m [28]. Similarly, the white Gaussian noise power is N0=3×10−13, the V2I and V2V communication bandwidths are BV2I=BV2V = 1MHz, the V2I path loss exponent is σ=2, and the transmit power of the onboard unit is Pt=1.3W [33]. As the qualified vehicle (RRV) acts as a mini server for the requesting vehicle (RHV), we set the communication and computation cost for the vehicle according to the ratio of the total computational capacity of the VEC server. The detailed setting of simulation parameters is listed in Table 2.

Table 2 Simulation Parameters
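To illustrate how these parameters combine, a standard Shannon-capacity estimate of the V2I rate is shown below. This is a common textbook formulation, and the paper's exact channel model may differ; the 100 m evaluation distance is an assumption (the RSU's vertical offset e):

```python
import math

def v2i_rate(bandwidth_hz, tx_power_w, distance_m, path_loss_exp, noise_w):
    """Achievable rate R = B * log2(1 + P_t * d^{-sigma} / N0)."""
    snr = tx_power_w * distance_m ** (-path_loss_exp) / noise_w
    return bandwidth_hz * math.log2(1 + snr)

# Listed simulation values: B = 1 MHz, P_t = 1.3 W, sigma = 2, N0 = 3e-13
rate_bps = v2i_rate(1e6, 1.3, 100.0, 2, 3e-13)  # roughly tens of Mbit/s
```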

In order to show the efficiency of our proposed approach (designated as MAP), we compare its performance with conventional partial offloading (represented as Conventional) and the MEC partial offloading technique (i.e., MEC Partial) [27]. In the conventional partial offloading scheme, the task is computed locally and on the VEC server without the support of other vehicles, while MEC Partial gets offloading support from both the VEC server and nearby vehicles along with local computing. However, the latter selects the vehicle according to the stay time to minimize the outage probability.

Figure 2 represents the total computation offloading cost with respect to vehicle density on the road. We compare our proposed MAP scheme with the two benchmark schemes, i.e., the Conventional and MEC Partial offloading schemes. From Fig. 2, it is observed that MAP performs best in terms of saving the total offloading cost, especially when the vehicle density is high. In the low-density scenario, the differences between the costs of the three schemes are minor: the computation load on each VEC server is low, and a large percentage of the offloaded tasks can be computed within the required time while the vehicles access the RSUs. Since the queue size is small, the time a task spends on the VEC server is short, which lowers the cost. On the other hand, in the high-density case, the burden on the VEC server increases, which also increases the time a task spends on the VEC server as well as the cost. Due to communication and computation, the overall cost of the conventional offloading scheme rises fast as the density of the vehicles grows. Moreover, in MEC partial offloading, part of the task is offloaded to the vehicle that has the least cost compared to the other vehicles in the vicinity at that time, and this portion must be both uploaded and executed within the stay time of that vehicle. In our scheme, by contrast, the V2V portion βn must only be uploaded within the stay time, while the duration from uploading to execution must be within the maximum tolerable delay. Therefore, the value of βn can be greater, which reduces more cost and better utilizes the vehicles’ resources. Our proposed scheme notably reduces the computation and communication cost of the system by fully exploiting the underutilized vehicular resources.

Fig. 2

The total offloading cost in terms of varying RHVs, when RRVs=10

Figure 3 indicates a decrease in the total offloading cost of the system with an increase in the number of RRVs. From Fig. 3, we observe that when the number of RRVs is fixed at 10, 20, 30, 40, or 50 while the number of RHVs varies, the total offloading cost declines as the number of RRVs increases. This is mainly because RHVs have more opportunities to select the best nearby vehicle, which incurs less cost and more benefit. Thus, these comparisons reveal that in partial task offloading, an increase in RRVs significantly influences the overall system performance.

Fig. 3

The total offloading cost versus varying RHVs with fixed No. of RRVs

Figure 4 illustrates the total offloading cost with an increase in the number of RRVs. We evaluate our scheme by examining the impact of both αn and βn on the total offloading cost of the system. From Fig. 4, we observe that the more αn contributes to the offloading process, the more the cost decreases. Similarly, with an increase in βn, the vehicles’ computational resources can be exploited more effectively. Since the remaining portion for the VEC server shrinks as the values of αn and βn increase, the scheme becomes more cost-effective. These results further corroborate our numerical analysis.

Fig. 4

The total offloading cost with fixed values of αn and βn, when RRVs=10

Figure 5 shows the total offloading cost versus the maximum tolerable delay. Here, the task data size is fixed at 25 MB, while the numbers of RHVs and RRVs are both set to 10. To observe the role of the tolerable delay, we take different values of the maximum tolerable delay. Under practical assumptions, at any given time the VEC servers are at heterogeneous levels of computation load. If the vehicles choose the conventional scheme, complex computational tasks may take longer and incur a higher cost to complete, since a larger part of the task is shifted to the nearby RSU, eventually leading to increased system cost. Moreover, if the vehicles adopt the MEC partial offloading scheme, they still consume VEC computation resources by putting more load on the servers; besides, tasks that require a prompt response can be offloaded only if the vehicle stays in the communication range of the other vehicle until the job is fully completed. Among all the schemes, we note that MAP always achieves the best performance, since a larger portion of a task can be distributed to nearby vehicles. In Fig. 5, at a maximum tolerable delay of 12, the total offloading cost is reduced by 19% and 15% compared to the conventional and MEC partial schemes, respectively. Moreover, our proposed scheme becomes more efficient as the density of vehicles increases. From Fig. 5, it can easily be observed that the MAP scheme is cost-effective under any given tolerable delay.

Fig. 5

The total offloading cost versus maximum tolerable delay tmax, when RHVs=10 & RRVs=10

Figure 6 indicates the relationship between the task data size Dn and the total offloading cost, where the numbers of RHVs and RRVs are both set to 10. From Fig. 6, we observe that as the size of the task increases along the x-axis, the curves of all three schemes trend upward, which shows that the size of the task has a direct impact on the total offloading cost. Our proposed scheme achieves the best results, as it shows the slowest growth. The slope of the conventional curve is greater than that of the other two schemes, showing that its total offloading cost grows rapidly: the larger the data volume of the computing task, the greater the portion allotted to the VEC server, thus increasing the total system cost. In our proposed MAP task offloading scheme, by contrast, the transmission and computation load on the VEC server is relieved, which also avoids network congestion.

Fig. 6

The total offloading cost to data size Dn, when RHVs=10 & RRVs=10

Figure 7 represents a comparison of the total offloading cost with varying RHV velocity while fixing the speed of the RRVs at 60 km/h. In terms of the impact of speed, we note from Fig. 7 that a low offloading cost is incurred when the RHVs’ speed is close to the RRVs’ speed: since the RHVs then have a stable and longer stay time, a greater portion of the task is transmitted to the RRVs. On the other hand, when the speed of the RHVs is lower or higher than 60 km/h, the V2V connection time suffers, as the RHVs quickly move out of the communication range of the RRVs. In that case, a larger part of the task is shifted to the VEC server, which increases the offloading cost. We can observe that the proposed scheme outperforms the other benchmark schemes.

Fig. 7

The total offloading cost versus varying RHVs velocity, when velocity of the RRVs=60km/h
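The stay-time effect discussed above follows directly from the d_{n,i}/|v_n − v_i| term; a small sketch (the helper name is hypothetical, speeds in m/s):

```python
def stay_time(dist_ni_m, v_rhv_ms, v_rrv_ms):
    """V2V connection (stay) time: remaining shared-range distance over the
    relative speed. Equal speeds give an unbounded stay time, so a greater
    portion of the task can be offloaded to the RRV."""
    rel_speed = abs(v_rhv_ms - v_rrv_ms)
    return float("inf") if rel_speed == 0 else dist_ni_m / rel_speed
```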


Conclusion

In this paper, we proposed a mobility-aware partial task offloading algorithm to minimize the total offloading cost. To make the scheme cost-efficient, the vehicles’ available resources are exploited. In this scheme, the task is divided into three parts, and we determined the allocation ratio among these parts according to the vehicles’ mobility. Moreover, we estimated the transmission rates for V2V and V2I communication in the light of practical assumptions. Extensive simulation results demonstrate that the communication and computation resources of nearby vehicles not only reduce the cost but also relieve the burden of the VEC servers, especially those deployed in a dense urban environment, and confirm our proposed scheme’s effectiveness against the compared schemes. Although the results provided in this work significantly contribute to the state-of-the-art, they can still be improved in many ways. One of the major challenges in partial task offloading for vehicles is mobility, which greatly affects V2V and V2I communication. In this regard, our work could be extended to highway scenarios, and the task offloading could be improved by incorporating mmWave communications or considering 5G New Radio. These challenging yet interesting extensions are left for future work.

Availability of data and materials

Random numbers were generated to check validity; therefore, no supporting dataset is available.


References

  1. Abolfazli S, Sanaei Z, Ahmed E, Gani A, Buyya R (2013) Cloud-based augmentation for mobile devices: motivation, taxonomies, and open challenges. IEEE Commun Surv Tutor 16(1):337–368.
  2. Technology and Computing Requirements for Self-Driving Cars.
  3. Bitam S, Mellouk A, Zeadally S (2015) VANET-cloud: a generic cloud computing model for vehicular ad hoc networks. IEEE Wirel Commun 22(1):96–102.
  4. Jang I, Choo S, Kim M, Pack S, Dan G (2017) The software-defined vehicular cloud: A new level of sharing the road. IEEE Veh Technol Mag 12(2):78–88.
  5. Taleb T, Dutta S, Ksentini A, Iqbal M, Flinck H (2017) Mobile edge computing potential in making cities smarter. IEEE Commun Mag 55(3).
  6. You C, Huang K, Chae H, Kim B-H (2016) Energy-efficient resource allocation for mobile-edge computation offloading. IEEE Trans Wirel Commun 16(3):1397–1411.
  7. Lin B, Zhu F, Zhang J, Chen J, Chen X, Xiong NN, Mauri JL (2019) A time-driven data placement strategy for a scientific workflow combining edge computing and cloud computing. IEEE Trans Ind Inform 15(7):4254–4265.
  8. Huang X, Yu R, Kang J, Zhang Y (2017) Distributed reputation management for secure and efficient vehicular edge computing and networks. IEEE Access 5:25408–25420.
  9. Wang S, Zhang X, Zhang Y, Wang L, Yang J, Wang W (2017) A survey on mobile edge networks: Convergence of computing, caching and communications. IEEE Access 5:6757–6779.
  10. Chen X, Chen J, Liu B, Ma Y, Zhang Y, Zhong H (2019) AndroidOff: Offloading android application based on cost estimation. J Syst Softw 158:110418.
  11. Chen X, Chen S, Ma Y, Liu B, Zhang Y, Huang G (2019) An adaptive offloading framework for android applications in mobile edge computing. Sci China Inf Sci 62(8):82102.
  12. Ahmed M, Li Y, Waqas M, Sheraz M, Jin D, Han Z (2018) A survey on socially aware device-to-device communications. IEEE Commun Surv Tutor 20(3):2169–2197.
  13. Waqas M, Niu Y, Li Y, Ahmed M, Jin D, Chen S, Han Z (2019) Mobility-aware device-to-device communications: Principles, practice and challenges. IEEE Commun Surv Tutor.
  14. Oteafy SM, Hassanein HS (2018) IoT in the fog: A roadmap for data-centric IoT development. IEEE Commun Mag 56(3):157–163.
  15. Anawar MR, Wang S, Azam Zia M, Jadoon AK, Akram U, Raza S (2018) Fog computing: An overview of big IoT data analytics. Wirel Commun Mob Comput 2018.
  16. Hou X, Li Y, Chen M, Wu D, Jin D, Chen S (2016) Vehicular fog computing: A viewpoint of vehicles as the infrastructures. IEEE Trans Veh Technol 65(6):3860–3873.
  17. Raza S, Wang S, Ahmed M, Anwar MR (2019) A survey on vehicular edge computing: Architecture, applications, technical issues, and future directions. Wirel Commun Mob Comput 2019.
  18. Zhang K, Mao Y, Leng S, Vinel A, Zhang Y (2016) Delay constrained offloading for mobile edge computing in cloud-enabled vehicular networks. In: Proceeding of the 8th International Workshop on Resilient Networks Design and Modeling, 288–294. IEEE.
  19. Ye D, Wu M, Tang S, Yu R (2016) Scalable fog computing with service offloading in bus networks. In: 2016 IEEE 3rd International Conference on Cyber Security and Cloud Computing (CSCloud), 247–251. IEEE.
  20. Feng J, Liu Z, Wu C, Ji Y (2019) Mobile edge computing for the internet of vehicles: Offloading framework and job scheduling. IEEE Veh Technol Mag 14(1):28–36.
  21. Zhang K, Mao Y, Leng S, Maharjan S, Zhang Y (2017) Optimal delay constrained offloading for vehicular edge computing networks. In: Proceeding of the International Conference on Communications, 1–6. IEEE.
  22. Lai Y, Yang F, Zhang L, Lin Z (2018) Distributed public vehicle system based on fog nodes and vehicular sensing. IEEE Access 6:22011–22024.
  23. Ren J, Yu G, Cai Y, He Y, Qu F (2017) Partial offloading for latency minimization in mobile-edge computing. In: Proceeding of the Global Communications Conference, 1–6. IEEE.
  24. Luoto P, Bennis M, Pirinen P, Samarakoon S, Horneman K, Latva-Aho M (2017) Vehicle clustering for improving enhanced LTE-V2X network performance. In: Proceeding of the European Conference on Networks and Communications, 1–5. IEEE.
  25. Zhang K, Mao Y, Leng S, He Y, Zhang Y (2017) Mobile-edge computing for vehicular networks: A promising network paradigm with predictive off-loading. IEEE Veh Technol Mag 12(2):36–44.
  26. Li L, Zhou H, Xiong SX, Yang J, Mao Y (2019) Compound model of task arrivals and load-aware offloading for vehicular mobile edge computing networks. IEEE Access 7:26631–26640.
  27. Bozorgchenani A, Tarchi D, Corazza GE (2018) Mobile edge computing partial offloading techniques for mobile urban scenarios. In: Proceeding of the Global Communications Conference, 1–6. IEEE.
  28. Wang H, Li X, Ji H, Zhang H (2018) Federated offloading scheme to minimize latency in MEC-enabled vehicular networks. In: Proceeding of the Globecom Workshops, 1–6. IEEE.
  29. Cheng X, Wang C-X, Ai B, Aggoune H (2013) Envelope level crossing rate and average fade duration of nonisotropic vehicle-to-vehicle Ricean fading channels. IEEE Trans Intell Transp Syst 15(1):62–72.
  30. Zheng K, Liu F, Zheng Q, Xiang W, Wang W (2013) A graph-based cooperative scheduling scheme for vehicular networks. IEEE Trans Veh Technol 62(4):1450–1458.
  31. Wang Y, Sheng M, Wang X, Wang L, Li J (2016) Mobile-edge computing: Partial computation offloading using dynamic voltage scaling. IEEE Trans Commun 64(10):4268–4282.
  32. Aminizadeh L, Yousefi S (2014) Cost minimization scheduling for deadline constrained applications on vehicular cloud infrastructure. In: Proceeding of the International Conference on Computer and Knowledge Engineering, 358–363. IEEE.
  33. Mazza D, Tarchi D, Corazza GE (2014) A partial offloading technique for wireless mobile cloud computing in smart cities. In: Proceeding of the European Conference on Networks and Communications, 1–5. IEEE.
  34. Guo S, Liu J, Yang Y, Xiao B, Li Z (2018) Energy-efficient dynamic computation offloading and cooperative task scheduling in mobile cloud computing. IEEE Trans Mob Comput 18(2):319–333.
  35. Fan Y, Zhai L, Wang H (2019) Cost-efficient dependent task offloading for multiusers. IEEE Access 7:115843–115856.
  36. Lu Z, Sun X, La Porta T (2016) Cooperative data offloading in opportunistic mobile networks. In: Proceeding of the Annual INFOCOM Conference on Computer Communications, 1–9. IEEE.
  37. Bazzi A, Masini BM, Zanella A, Thibault I (2017) On the performance of IEEE 802.11p and LTE-V2V for the cooperative awareness of connected vehicles. IEEE Trans Veh Technol 66(11):10419–10432.
  38. Feng J, Liu Z, Wu C, Ji Y (2018) Mobile edge computing for the internet of vehicles: Offloading framework and job scheduling. IEEE Veh Technol Mag 14(1):28–36.
  39. Shah SS, Ali M, Malik AW, Khan MA, Ravana SD (2019) vFog: A vehicle-assisted computing framework for delay-sensitive applications in smart cities. IEEE Access 7:34900–34909.
  40. Munoz O, Pascual-Iserte A, Vidal J (2014) Optimization of radio and computational resources for energy efficiency in latency-constrained application offloading. IEEE Trans Veh Technol 64(10):4738–4755.
  41. Chen S, Hu J, Shi Y, Peng Y, Fang J, Zhao R, Zhao L (2017) Vehicle-to-everything (V2X) services supported by LTE-based systems and 5G. IEEE Commun Stand Mag 1(2):70–76.



Acknowledgements

Not applicable.

Funding

This work was supported by the National Key R&D Program of China (2018YFB1402801), and Funds for Creative Research Groups of China (61921003).

Author information

Authors' contributions

Conceptualization, Salman Raza; Methodology, Salman Raza and Manzoor Ahmed; Resources, Salman Raza; Validation, Muhammad Rizwan Anwar and Muhammad Ayzed Mirza; Visualization, Muhammad Rizwan Anwar and Muhammad Ayzed Mirza; Writing – original draft, Salman Raza and Manzoor Ahmed; Writing – review & editing, Wei Liu, Qibo Sun and Shangguang Wang. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Wei Liu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Raza, S., Liu, W., Ahmed, M. et al. An efficient task offloading scheme in vehicular edge computing. J Cloud Comp 9, 28 (2020).



Keywords

  • Vehicular edge computing
  • Task offloading
  • Mobility
  • Mobile edge computing
  • Vehicular networks