 Research
 Open Access
Efficient resource allocation and dimensioning of media edge clouds infrastructure
Journal of Cloud Computing volume 6, Article number: 27 (2017)
Abstract
Media Edge Cloud Data Centers (MECDCs) interconnected by a metro network are selected as the infrastructure to enhance the Quality of Experience (QoE) of end users of multimedia applications. Unlike traditional Data Centers, MECDCs are kept closer to the user and have limited resources available at any given site. It is therefore of paramount importance for infrastructure service providers to efficiently dimension and use media resources in an environment where applications have high resource demands and the infrastructure has limited availability. To perform this task dynamically, we first propose a resource allocation strategy that considers the physical characteristics of the networking layer while minimizing the cost of deploying media applications. Second, we analyze different configurations of the networking layer in order to enhance the use of MECDC resources and the QoE of end users. Simulation results show a clear advantage of the proposed optimization-based approach over the benchmarks in terms of provisioning cost, blocking ratio and resource use.
Introduction
The evolution in network technologies is changing the way in which communications are designed. With the development of Web 2.0 that supports multimedia applications, customer expectations of rich media provisioning have increased. Media applications are becoming essential to our everyday life [1]; their popularity is increasing both because of the spread of social networks and the ease with which they can share audio, videos, and streaming services.
Media applications can vary from video sharing (such as YouTube and Netflix), to online radio (Spotify) or image sharing (Pinterest). Most of these services demand a significant amount of media processing and have stringent Quality of Service (QoS) requirements [2]. This is particularly the case with User Generated Content (UGC), with its huge volume of short videos and its significantly fluctuating user demand. Cloud computing is gaining enormous momentum as a cost-efficient solution for providing media services with storage and processing requirements [3]. Large-scale public clouds that offer their computing, storage, and network resources in the form of Infrastructure-as-a-Service (IaaS) have attracted Cloud Service Providers (CPs) [4–6].
A current area of rapid innovation is the use of Media Edge Cloud Data Centers (MECDCs) using several hundred servers [7, 8]. This MEC model allows CPs to reduce their capital costs and to benefit from the elasticity of the cloud by placing Virtual Machines (VMs) running media processing tasks closer to the end users.
The Edge Cloud infrastructure uses smaller DCs located in the last mile, closer to major population centers, in order to honor Service Level Agreement (SLA) contracts for the QoS requirements of highly interactive content delivery: online searching (such as Google), social networking (such as Facebook), video streaming, and so on [8]. This requires a topology located in the metropolitan area that can transfer media data among DC locations. Metro optical fiber networks have been investigated as the best way to guarantee an efficient data transport service among MECDC locations [9]. Metro optical networks can indeed transport large files with short time delays, making for a suitable architecture for applications where avoiding delay is critical, such as online gaming, video streaming and image sharing. Advancements in optical communication have enabled the grooming of low-granularity traffic using optical Multiservice Provisioning Platform (MSPP) transponders [10]. Examples are the media applications in social networks, often in the form of small files like profile pictures or short videos.
To ensure the SLA of the hosted applications, it is important that the cloud substrate resources and the link latency constraints are satisfied. The presence of MECDCs closer to the user gives service providers two advantages over traditional DCs: (a) a reduction in the cost per bit, and (b) increased application performance and throughput. On the other hand, compared to traditional DC networks, MECDCs have limited resources at a given site. Smart site selection is important in order to ensure both QoS and a minimum cost for the CPs. It is our belief that overall substrate resource use is improved if the incoming media cloud request is mapped to the MECDCs simultaneously for both its networking and computing requirements. Our approach, proposed in the “Media edge cloud resource allocation approach” section, has two crucial improvements over related works in the literature [11–20]. The first improvement is a resource allocation strategy that uses the physical characteristics of the networking layer to reduce the deployment cost. The strategy, referred to as LCGMEC, uses Column Generation, a large-scale optimization technique, for mapping media cloud requests to the MECDCs. LCGMEC defines a media cloud request as a Virtual Network (VN): a set of nodes and a set of links with QoS requirements. The second improvement is to evaluate different networking configurations so as to determine which can provide better QoS.
The remainder of this paper is organized as follows. “Related works” section describes other work related to our proposal. “Media cloud computing” section defines the MECDCs resources allocation problem. “Media edge cloud resource allocation approach” section presents the mapping solution of media cloud requests into MECDCs infrastructure. “Numerical results” section introduces benchmarks and simulation results to evaluate performance. “Conclusion” section concludes the paper.
Related works
The literature contains a number of approaches to efficiently solving the challenges of media cloud request mapping. This can be defined as mapping a set of incoming media cloud requests onto the MECDCs infrastructure so as to optimize objectives such as cost, profit, or network use, while ensuring that the QoS constraints of the incoming requests are satisfied. The challenges mainly result from the increased computational complexity when media cloud computing and the networking resource requirements are considered jointly.
To overcome these issues, proposals in the literature have considered either relaxing networking QoS by focusing only on the computing requirements [11, 13] or adopting a two-phase approach [14, 15, 18], which first preselects the mapping of hosting nodes and then maps virtual links.
The authors in [11] present a bin-packing approach that dynamically maps Virtual Machines (VMs) into Physical Machines (PMs). As a result, networking requirements are not considered in the optimization model, which may mean that QoS requirements are not met.
In [12], the authors introduce an optimization algorithm based on a multi-objective formulation that optimizes the power used as well as the load balancing among DC servers. However, the cost of networking equipment is not considered. The model therefore lacks a realistic evaluation of the economic benefits of cloud service requests and could also result in QoS requirements not being met.
In [13], the proposal is for two cloud VN embedding approaches over an optical network. The authors' focus is on minimizing the power and spectrum used. However, the proposal neither details the dimensioning of the optical layer nor considers the impact of the optical networking parameters on the quality of the services offered.
In [14], the authors use a two-phase mapping approach, which first preselects the mapping of hosting nodes and then maps virtual links. Node mapping and link mapping are performed independently. Hence, this non-joint node and link embedding may result in a high number of blocked requests and in underused resources, meaning less profit for the cloud provider. In addition, the mapping is done using heuristic approaches, which may make the solution less than optimal.
The authors in [15] propose a mathematical programming scheme to coordinate node and link mapping. The proposal handles online Virtual Network requests and introduces a better correlation between the virtual node and virtual link embedding phases. However, the solution remains less effective than simultaneous mapping.
In [18], the authors propose a greedy algorithm that jointly optimizes the global workload assignment and the local VM allocation in order to minimize the resource cost under the response time requirements. While the focus is on the media cloud request and the stringent QoS requirements, the analysis does not study the impact of the key networking layer factors on performance.
The authors of [19] study a joint optimization model that is geographically distributed and interconnected using an optical network. However, the resource allocation applies more to a general IaaS request model, and the impact of the networking layer on QoS receives much less attention.
The proposal in [20] is possibly the closest to our work. Its light-trail approach adapts well to multicasting applications, and the proposal also considers the characteristics of the optical layer. However, the authors do not analyze the impact of the networking layer on the QoS for media cloud applications, and their resource allocation proposal is better suited to general IaaS requirements.
In [21], the authors propose a next-generation, ubiquitous, converged infrastructure. The proposal connects fixed and mobile end users with Data Centers through a heterogeneous network integrating an optical metro network based on time-shared network technology and wireless access. The approach ensures allocation of the required resources across all technology domains to support their specific characteristics, such as end-user mobility.
In [22], the authors provide a VN planning scheme with the development of Wavelength Division Multiplexing (WDM) techniques and cloud computing. The approach uses a united virtualization of optical and server resources that collaboratively incorporates the optical backbone into Data Centers. The authors demonstrate the effectiveness of their strategy in the context of power outages and evolving recovery.
In [23], the authors use compressive sensing (CS) techniques to support scalable service provisioning in converged optical/wireless clouds. They claim that the CS techniques achieve optimal service provisioning with significantly reduced control and less computational complexity.
In [24], the authors use a column generation approach for VN embedding. Their focus is on ensuring resiliency for the accepted VN requests, over a network of interconnected, geo-distributed DCs that are not limited by their computing resources. In our work, by contrast, the network topology is composed of Edge DCs with limited resources, so our focus is on ensuring QoS even with those limited resources.
In [25], the authors propose a VN embedding approach on a wireless optical network. Incoming VN requests are mapped to a local Wireless Mesh Network (WMN) so as to reduce the transmission power. If a request is not satisfied by the WMN, it is mapped to the Optical Edge Network. The main constraints, however, are the computational power and the wavelength availability. In addition, the modeling ignores several optical network attributes such as node grooming capacity and different optical architectures. In our work, we consider the optical network characteristics in order to make an informed decision on the VNs' mapping location and the availability of DC resources.
In [26], the authors propose an approach to determine the risk associated with a given Virtual Machine using threat and vulnerability factors. These factors identify which incoming VN requests can be risky. The main decision on the location of VM is governed by how risky the VN requests are. In our work, by contrast, the decision is based on the characteristics and performance of the network.
Most of these works share the following features: (1) the use of two-phase VN mapping, (2) general Infrastructure as a Service (IaaS) requirements, and (3) the mapping of VN requests one at a time (i.e., online). Our proposal differs as follows: (1) for each accepted media cloud request, we calculate the optimal one-shot networking and hosting scheme with respect to the QoS requirements (latency, bandwidth, computing, and mapping location), which guarantees a better use of MECDC resources and an increased number of accepted VN requests; and (2) media cloud requests are served in batches (see the “Small-batch MEC mapping” section), which allows us to calculate a better mapping solution over time.
Media cloud computing
Media edge cloud architecture
Two classes of DC-based cloud architecture can be identified: (1) large, geographically distributed DCs, and (2) MECDCs, as shown in Fig. 1. Large DCs are centralized and highly manageable, thereby providing an economy of scale. However, they have inherent limitations for service hosting. Simple economic factors dictate that they are built only in locations where capital and operational costs are low; large DCs are therefore generally located far from end users. This may result in unmet QoS requirements (such as latency and bandwidth/throughput) as well as higher networking costs. To address these drawbacks, Edge DCs (such as Micro-DCs and Edge Clouds) have been proposed. This new class of small-scale DCs, known as MECDCs, is well suited to service hosting. In the MEC architecture, media content and processing are pushed to the edge of the cloud based on user profile.
Networking transport architecture
As mentioned, MECDC locations are interconnected using a metro optical network, which can transport large files with short latency, making it suitable for delay-critical applications such as online video gaming, video streaming and image sharing. An optical lightpath transport architecture is used to allow grooming of low-granularity traffic using optical MSPP transponders. We provide details on the transport architecture in the following sections.
Lightpath transport architecture
Data flows among MECDC locations are transported using a set of lightpaths built on the wavelengths available on each fiber link that interconnects MEC locations. A lightpath is an optical routing path that allows communication between the set of nodes along the path; it uses only one wavelength between its end nodes. A Multiservice Provisioning Platform (MSPP) transponder is used to add signals to or drop them from a wavelength at a given MECDC node.
Add/Drop media request in edge cloud node
A Multiservice Provisioning Platform (MSPP) fabric is set up in each node of the interconnected MECDC locations. MSPPs allow data flows to be added to or dropped from network transport signals according to traffic demand. Apart from conventional SONET signals, MSPPs handle a wide variety of client signals (Gigabit Ethernet, ATM, IP, and so on). Furthermore, MSPP equipment is modular and can be configured by selecting the components appropriate to the desired networking task at a given node [10]. Of these components, transponders, the interfaces between the optical and electrical domains, make up the main cost element of an MSPP fabric. MSPP transponders are used to add/drop low-rate client signals and groom them into wavelengths.
Media edge cloud resource allocation approach
Resource allocation is one of the most important aspects of MECDC management, since it is directly related to the cost and QoS requirements of media cloud services. Efficient resource allocation has a positive impact on the service provider's profitability. The resource allocation problem is to minimize hosting and networking costs while preserving QoS constraints. The QoS requirements are: (1) a specific data transfer capacity with short latency, (2) predefined computing and graphics processing capacities, (3) storage and memory capacities, and (4) a specific processing order for the tasks that compose the media cloud request. MECDC resources are allocated to the incoming media cloud requests in a batch-wise fashion, as described in the following section.
Small-batch MEC mapping
In a realistic scenario, media cloud requests usually do not arrive one after another at regular time intervals [15]. A realistic mapping scenario of MEC requests may therefore involve an approach in which MEC requests are queued and then processed in small batches in order to optimize costs for the MEC provider over time [17]. To do so, we divide the mapping planning time into a set of consecutive short periods (windows) and we describe the demand with a set of MEC requests, one for each new window. From one period to the next, we assume that most MEC requests remain the same, representing, for example, the global steady state of the long-term Service Level Agreement (SLA) between the provider and its customers. The change in demand can therefore be measured with a turnover rate, such as 20% of incoming (new) and 30% of leaving (dropped or ending) requests. Expressed more precisely, let P be the set of mapping planning periods and M(0) the initial set of MEC requests. The set of MEC requests M(p) indexed by p≥1 is defined as:

M(p) = (M(p−1) ∖ M _{DROP}(p)) ∪ M _{NEW}(p)

where M(p−1) is the set of MEC requests accepted at the end of period p−1, M _{NEW}(p) is the set of new incoming requests, and M _{DROP}(p) is the set of ending MEC requests at the start of period p. The NEW and DROP rates are randomly selected between 10 and 40%, giving a range of cases from slowly fluctuating (10%) to quickly changing (40%) MEC demand.
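The batch-turnover model above can be sketched as follows; the request identifiers, pool, and function names are hypothetical illustrations, not the paper's implementation:

```python
import random

def next_batch(accepted, request_pool, new_rate, drop_rate, rng):
    """One planning window: M(p) = (M(p-1) \\ M_DROP(p)) | M_NEW(p).

    new_rate and drop_rate are fractions of |M(p-1)| drawn in [10%, 40%].
    """
    accepted = set(accepted)
    n_drop = int(drop_rate * len(accepted))
    dropped = set(rng.sample(sorted(accepted), n_drop))      # ending requests
    n_new = int(new_rate * len(accepted))
    fresh = set(rng.sample(sorted(request_pool - accepted), n_new))  # arrivals
    return (accepted - dropped) | fresh

rng = random.Random(7)
pool = set(range(100))      # hypothetical universe of request identifiers
m = set(range(20))          # M(0): 20 initially accepted requests
for p in range(5):          # five consecutive planning windows
    rate_new = rng.uniform(0.10, 0.40)
    rate_drop = rng.uniform(0.10, 0.40)
    m = next_batch(m, pool, rate_new, rate_drop, rng)
```

With fixed rates, the batch size evolves deterministically: dropping 30% of 10 requests and adding 20% yields a batch of 9.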
Mathematical modeling
To evaluate the merits of the allocation approach of MECDCs resources, we propose a mathematical formulation as follows.
The MECDCs infrastructure is represented by a directed graph G _{ m }=(H _{ m },E _{ m }), where H _{ m } is the set of MECDC locations and E _{ m } is the set of optical fiber links. The network topology is composed of either a Unidirectional Path-Switched Ring (UPSR), consisting of one unidirectional optical fiber, or a Bidirectional Line-Switched Ring (BLSR), consisting of two fibers, one in each direction.
End-to-end delay between two MECDC nodes is also impacted by the number of MSPP transponders used in the lightpath. Figure 1 shows an example of MECDCs, where each physical optical link e∈E _{ m } between two MECDC locations offers W wavelengths, each with a bandwidth capacity B. We define a lightpath l as a set of consecutive optical links. We denote by L the set of lightpaths available to serve the networking requirements of media cloud requests M _{ n }, n∈N. Candidate lightpaths can be calculated using a K-shortest path algorithm [27] for all pairs of MECDC locations connected by optical WDM links. Either ring or mesh topologies can be used in calculating the paths. The ring topology is evaluated in the “Numerical results” section; it is considered because most current metro topologies are ring-based and are still favored over mesh topologies for their inherent simplicity of design and low OPEX.
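As a minimal stand-in for the K-shortest path computation of candidate lightpaths, the sketch below enumerates simple paths on a small bidirectional ring and keeps the k shortest by hop count; a production implementation would use Yen's algorithm, and the ring topology here is illustrative:

```python
def k_shortest_paths(adj, src, dst, k):
    """Enumerate all simple paths from src to dst via DFS and keep the k
    shortest by hop count (uniform link weights); adequate for the handful
    of nodes in a metro ring, but not a scalable Yen's algorithm."""
    paths = []
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            paths.append(path)
            continue
        for nxt in adj[node]:
            if nxt not in path:          # simple paths only, no revisits
                stack.append((nxt, path + [nxt]))
    return sorted(paths, key=len)[:k]

# 4-node bidirectional ring (BLSR-like): 0-1-2-3-0
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
candidates = k_shortest_paths(ring, 0, 2, k=2)
# two candidate lightpaths around the ring: [0, 1, 2] and [0, 3, 2]
```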
Client signals can be added and dropped on any MECDC hosting node u using MSPP transponders N _{MSPP}(u). In addition, each MECDC hosting node u∈H _{ m } offers a Compute Processing Unit (CPU) capacity P _{ u }, a Graphic Processing Unit (GPU) capacity G _{ u }, a memory capacity M _{ u }, and a storage capacity S _{ u }. Table 1 shows the generic description of these parameters.
Similarly, a media cloud request is divisible into a set of interdependent atomic tasks modeled as a weighted, undirected Task Dependencies Graph (TDG) M _{ n }=(T _{ n },A _{ n }), where n∈N={1,2,…,N}. T _{ n } denotes the set of tasks and A _{ n } the set of directional virtual networking links between the tasks that form media cloud request M _{ n }. Figure 2 shows an example of a TDG. A media cloud request could be, for example, a MapReduce request [32], where the input is a huge amount of data split into smaller parts. The Mapper code is executed on every part, and after Sort/Shuffle all the results are sent to one or more Reducers that merge them into one. More specifically, in the dependency graph, task T1 is the Splitter/initializer, T2, T3 and T4 are the Mappers, T5 is the Shuffle/Sorter, and T6 is the Reducer that combines the results.
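The processing order implied by such a TDG can be recovered with a topological sort; the sketch below uses the task names from the MapReduce example above (`graphlib` requires Python 3.9+):

```python
from graphlib import TopologicalSorter

# Task Dependencies Graph for the MapReduce example: keys depend on values.
tdg = {
    "T2": {"T1"}, "T3": {"T1"}, "T4": {"T1"},   # Mappers wait for the Splitter
    "T5": {"T2", "T3", "T4"},                   # Shuffle/Sort waits for all Mappers
    "T6": {"T5"},                               # Reducer waits for Shuffle/Sort
}
order = list(TopologicalSorter(tdg).static_order())
# a valid processing order: T1, then T2/T3/T4 in some order, then T5, then T6
```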
Each task t∈T _{ n } has a set of VM cloud computing requirements: (a) CPU capacity p _{ t }, (b) GPU capacity g _{ t }, (c) memory requirement m _{ t }, (d) storage capacity s _{ t }, and (e) processing order o _{ t } with respect to any t∈T _{ n }. Similarly, each link a∈A _{ n } has networking requirements: (a) data transfer capacity b _{tt′} between media service subtasks t and t ^{′}, and (b) maximum number of Optical-Electrical-Optical (OEO) conversions h _{tt′} of an optical path linking tasks t and t ^{′}, since the transport signal delay is affected mainly by the number of conversions between the electrical and optical domains. Note also that loading client signals requires an OEO conversion of the transport signal (wavelength). Table 2 describes these parameters.
Each media cloud request M _{ n } can be divided into hosting and network mapping. Each virtual hosting node t∈T _{ n } from a media cloud request n is mapped into substrate hosting nodes u∈H _{ m } by mapping M _{N}:T _{ n }↦H _{ m }.
Similarly, each virtual link a∈A _{ n } belonging to a media cloud request n is mapped to an optical lightpath \( l \in L_{u,v}^{a} \subset L\) by mapping M _{L}:A _{ n }↦L, where (u,v) are the MECDC nodes assigned, respectively, to the source and destination virtual nodes (s,d) of virtual link a.
When a media cloud request arrives, the CP has to determine whether to accept or reject it. This decision is largely based on the QoS requirements of the request, the availability of MECDC resources, and the economic cost of accepting the request. Since we are focusing on cloud computing and optical networking, we propose to calculate the mapping cost of each service request n, M _{ n }=(T _{ n },A _{ n }), as follows.
Details of mapping cost calculation are provided in the following section.
Column Generation formulation for MECDCs resource allocation (LCGMEC)
To avoid the scalability issue identified in the Integer Linear Programming (ILP) formulation [28], we propose the LCGMEC approach using the Column Generation technique [29]. We reformulate the resource allocation problem in terms of Independent Media Cloud Configurations (IMCCs). An IMCC configuration (Fig. 3) defines the mapping solution of at least one MEC request; it is represented by the set of substrate nodes used to handle the resource requirements (CPU, memory, GPU and storage) and the links/lightpaths, all on the same wavelength, used to connect these nodes. We denote by C the set of all possible IMCCs. The resource allocation problem can then be formulated with respect to the variables (λ _{ c }), c∈C, where λ _{ c }=1 if IMCC c is used in the mapping solution and 0 otherwise. In the new formulation, the MEC mapping problem is to choose a maximum of W IMCCs, as W wavelengths are available in each optical fiber link; each IMCC is mapped on one WDM wavelength. The resulting formulation corresponds to what is known as the master problem in a column generation approach [29], while generating each IMCC configuration corresponds to what is known as the pricing problem.
An IMCC configuration c∈C is defined by the vector \((a_{n}^{c})_{n \in N}\) such that: \(a_{n}^{c}=1\) if IMCC c serves media cloud request M _{ n } and 0 otherwise. We denote by COST_{ c } the cost of configuration c. This corresponds to the costs of the resources used (hosting and networking) for the set of MEC requests granted by IMCC c.
The use of Column Generation divides the original problem into a master problem and a pricing problem: (1) The problem of finding the best subset among the already generated IMCCs that minimize the objective function: mapping resources cost, and (2) the problem of generating an additional column (IMCC) to the constraint matrix of the master problem.
Master problem
The master problem, denoted by IMCC-ILP, is defined as follows:
Objective function:

min Σ _{c∈C} COST _{ c } λ _{ c }     (3)

where

COST _{ c } = Σ _{u∈H _{ m }} ( \(c_{u}^{s}\) S ^{c}(u) + \(c_{u}^{p}\) P ^{c}(u) + \(c_{u}^{m}\) M ^{c}(u) + \(c_{u}^{g}\) G ^{c}(u) + c _{MSPP} T ^{c}(u) ) + Σ _{e∈E _{ m }} \(c_{e}^{b}\) B ^{c}(e)
\(c_{u}^{s}\), \(c_{u}^{p}\), \(c_{u}^{m}\), \(c_{u}^{g}\) and \(c_{e}^{b}\) are, respectively, the unit costs of storage, CPU, memory and GPU at MECDC location u, and of bandwidth on optical link e. T ^{c}(u) is the number of MSPP transponders used in node u by IMCC c, and c _{MSPP} denotes the unit cost of an MSPP transponder. B ^{c}(e) is the bandwidth used on optical link e by IMCC c. We also denote by S ^{c}(u), P ^{c}(u), M ^{c}(u) and G ^{c}(u), respectively, the storage, CPU, memory and GPU capacities in MECDC location u used by IMCC c. We note that the objective function Eq. (3) minimizes the mapping cost of accepted MEC requests.
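The per-configuration cost COST _{ c } can be sketched as below; the field names and the flat (location-independent) unit prices are illustrative simplifications, since the paper prices each resource per location u:

```python
def imcc_cost(nodes, links, unit):
    """COST_c: per-node resource costs (storage S, CPU P, memory M, GPU G,
    MSPP transponders T) plus bandwidth cost on each optical link used.

    nodes: {u: {"S": ..., "P": ..., "M": ..., "G": ..., "T": ...}}
    links: {e: bandwidth used}; unit: flat unit prices (illustrative).
    """
    cost = 0.0
    for u, r in nodes.items():
        cost += (unit["s"] * r["S"] + unit["p"] * r["P"]
                 + unit["m"] * r["M"] + unit["g"] * r["G"]
                 + unit["mspp"] * r["T"])
    for e, bw in links.items():
        cost += unit["b"] * bw
    return cost

unit = {"s": 1, "p": 2, "m": 1, "g": 3, "mspp": 10, "b": 0.5}   # hypothetical prices
nodes = {"u1": {"S": 10, "P": 2, "M": 4, "G": 1, "T": 1}}        # one hosting node
cost = imcc_cost(nodes, {"e1": 5}, unit)                         # 31 + 2.5 = 33.5
```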
Constraints
Equations (5), (6), (7) and (8) guarantee the respect of available physical storage, CPU, GPU and memory capacity respectively. Equation (9) defines the number of WDM wavelengths available per optical link (fiber) l to guarantee the transport of data flows among MEC locations. Equation (10) defines the number of MSPP transponders available in optical network node u to add/drop/groom request flows to/from/with the wavelength available in the optical fibers connecting MECDC locations. Equation (11) guarantees that service requests can be satisfied with the available MECDCs resources.
Pricing problem
As mentioned previously, the pricing problem is to generate an additional configuration (IMCC), an additional column, for the constraint matrix of the current master problem. It is defined as follows.
Let α _{ u }, β _{ u }, γ _{ u }, η _{ u }, ζ _{ u }, θ _{ u } and ψ _{ n } be the dual variables associated with constraints (5), (6), (7), (8), (9), (10) and (11) respectively. Then, the reduced cost of variable λ _{ c } can be written:
We now express (12) in terms of the decision variables of the pricing problem. Those variables are defined as follows.

- z _{ n }=1 if media cloud request M _{ n } is served by IMCC c, and 0 otherwise.

- y _{ u }=1 if an MSPP transponder is installed in MECDC location u∈H _{ m }, and 0 otherwise.

- \(x_{l}^{a} = 1\) if virtual link a∈A _{ n } is assigned to lightpath l∈L _{ a }, and 0 otherwise, where L _{ a } is the set of lightpaths whose lengths do not exceed the number of OEO conversions h _{tt′} allowed for virtual link a=(t t ^{′})∈A _{ n }, n∈N.

- \(x_{u}^{t} =1\) if task t∈T _{ n } is assigned to MECDC location u∈H _{ m }, and 0 otherwise.
Next, we derive the relations between the pricing variables and the coefficients of the master problem. For each c∈C and n∈N, \(a_{n}^{c} =z_{n} \), and for each node u∈H _{ m }, we have:
Constraints:
Mapping of media cloud service tasks

- Mapping is done for all tasks of an accepted media cloud request M _{ n }:

$$ z_{n} \leq \sum\limits_{(u,u') \in H_{m}^{2}} x_{u}^{t}\, x_{u'}^{t'} \;; \quad (tt')=a \in A_{n},\ n \in N. \qquad (18)$$

- A task t of an accepted request M _{ n } is assigned to only one MECDC location node u:

$$ \sum\limits_{u \in H_{m}}x_{u}^{t} \leq z_{n} \;; \quad t \in T_{n},\ n \in N. \qquad (19)$$
Mapping of media cloud request link
where \(L_{u,u'}^{a}\) is the set of lightpaths between nodes u and u ^{′} having a number of OEO conversions less than the allowed value h _{tt′} for virtual link a=(tt ^{′}). The consideration of h _{tt′} has two major roles in determining the optimal solution:

1. It ensures that the number of OEO conversions does not exceed the maximum allowed for each virtual link of any media cloud request. This is important because metro networks often do not include optical amplifiers; amplifiers are mostly used in Wide Area Networks, where the distance between source and destination is significant.

2. It ensures that data is placed in the closest MECDC that can satisfy the QoS and cost requirements.
Accordingly, if request M _{ n } is accepted, then at least one lightpath l is assigned to allow data transfer between tasks t and t ^{′} assigned respectively to MECDC locations u and u ^{′}.
Number of MSPP transponders
An add/drop/grooming MSPP transponder port is set up in a MECDC location node u if at least one constituent task of a media cloud request is assigned to this location. We note that M is a constant that should be equal to or greater than the maximum number of virtual nodes t∈T _{ n } that can be mapped to substrate node u: M=G×N _{MSPP}(u), where N _{MSPP}(u) is the number of available MSPP transponders in node u and G is the grooming factor, i.e., the number of client signals that can be uploaded on each wavelength using an MSPP transponder.
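The big-M activation can be illustrated as follows; the function names are hypothetical, and the constraint shape (sum of x _{u}^{t} bounded by M·y _{ u }) is a standard way to force y _{ u }=1 as soon as a task lands on node u:

```python
def big_m(grooming_factor, n_mspp):
    """M = G * N_MSPP(u): with G client signals groomed per wavelength and
    N_MSPP(u) transponders at node u, at most M client signals (mapped
    task ports) can be added/dropped at that node."""
    return grooming_factor * n_mspp

def transponder_constraint_ok(tasks_at_u, y_u, grooming_factor, n_mspp):
    """Big-M activation: sum_t x_u^t <= M * y_u, so y_u must switch to 1
    as soon as at least one task is mapped onto node u."""
    return tasks_at_u <= big_m(grooming_factor, n_mspp) * y_u

# e.g. G = 4 client signals per wavelength and 8 transponders at the node:
# up to 32 task ports fit, but only if a transponder port is opened (y_u = 1)
```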
Wavelength bandwidth capacity
where b _{ a } is the bandwidth transfer requirement between any pair of tasks (t,t ^{′})=a∈A _{ n }. B is the bandwidth capacity of the wavelength supporting all lightpaths and \(\delta _{l}^{e}= 1\) if lightpath l uses optical substrate link e.
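A minimal sketch of checking this wavelength capacity constraint: aggregate b _{ a } over every substrate link e crossed (\(\delta_{l}^{e}=1\)) by the lightpaths carrying assigned virtual links, then compare each link's load against B. The identifiers below are illustrative:

```python
def link_loads(assigned, lightpath_links):
    """For every virtual link a (with demand b_a) assigned to lightpath l,
    add b_a on each substrate link e in l (delta_l^e = 1).
    Returns {e: load}; feasible iff every load <= B."""
    load = {}
    for (a, b_a), l in assigned.items():
        for e in lightpath_links[l]:
            load[e] = load.get(e, 0) + b_a
    return load

# hypothetical example: two virtual links whose lightpaths share link "e1"
paths = {"l1": ["e1", "e2"], "l2": ["e1", "e3"]}
loads = link_loads({("a1", 3): "l1", ("a2", 4): "l2"}, paths)
# loads == {"e1": 7, "e2": 3, "e3": 4}; feasible iff every value <= B
```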
Wavelength grooming factor
G is the wavelength grooming factor that allows control of the number of media cloud requests that can be loaded on a given wavelength (IMCC). By so doing, CP controls the load and congestion of optical network links. It allows the Cloud Provider to define a maximum use on the most congested links. In addition, it avoids an optical link becoming overloaded, thereby improving the latency experienced.
Linearization of quadratic terms
Constraints (18) and (20) include the quadratic term \(x_{u}^{t} x_{u'}^{t'}\). However, since this term is the product of two binary variables, it can easily be linearized by replacing it with a new binary variable \(y_{u,u'}^{t,t'}\), where \(y_{u,u'}^{t,t'}= x_{u}^{t} x_{u'}^{t'}\), and by adding the following constraints.
Inequalities (24) and (25) ensure that \(y_{u,u'}^{t,t'}\) is zero if either \(x_{u}^{t}\) or \(x_{u'}^{t'}\) are zero.
Inequality (26) ensures that \(y_{u,u'}^{t,t'}\) takes the value 1 if both binary variables \(x_{u}^{t}\) and \(x_{u'}^{t'}\) are set to 1. In our simulation, this linearization is performed implicitly by the linear solver CPLEX [30].
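The linearization can be checked exhaustively: assuming constraints (24)-(26) take the standard form y ≤ x₁, y ≤ x₂ and y ≥ x₁ + x₂ − 1, they leave exactly one feasible binary y for each binary (x₁, x₂), equal to the product:

```python
from itertools import product

def linearized(x1, x2):
    """Pick y subject to y <= x1, y <= x2, y >= x1 + x2 - 1, y in {0, 1}.
    For binary x1, x2 these constraints force y = x1 * x2."""
    feasible = [y for y in (0, 1)
                if y <= x1 and y <= x2 and y >= x1 + x2 - 1]
    assert len(feasible) == 1        # the constraints pin y down uniquely
    return feasible[0]

# exhaustive check that the linearization reproduces the product
for x1, x2 in product((0, 1), repeat=2):
    assert linearized(x1, x2) == x1 * x2
```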
Solving the LCGMEC model
This section discusses the steps involved in solving the LCGMEC model formulated in “Media edge cloud resource allocation approach” section.
Solving linear relaxation of the problem
We use the following Column Generation method to generate an embedding solution for media cloud requests.
ColumnGenerationProcedure():

1. Denote by LP(M) the continuous relaxation of the master problem ILP(M), obtained by replacing the integrality constraint (12) with λ _{ c }∈ [ 0,1] for any c∈C.

2. Initialize LP(M) with a dummy subset, that is, a set of artificial IMCCs with zero cost.

3. Solve the linear relaxation LP(M) of the master problem optimally using the CPLEX solver. Then go to step 4.

4. Solve the pricing problem optimally as follows:

(a) First, solve the pricing problem using a heuristic based on K-shortest paths and the node/link stress function technique proposed in [14].

(b) If this heuristic generates a new column with a negative reduced cost, go to step 5.

(c) Otherwise, solve the pricing problem exactly using the CPLEX solver. Then go to step 5.

5. If a column with a negative reduced cost has been found, add it to the current master problem and repeat steps 3 and 4. Otherwise, the master problem is optimally solved.
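The control flow of ColumnGenerationProcedure() can be sketched as below; `ToyMaster` and the pricing lambdas are hypothetical stand-ins for the CPLEX-backed master and pricing solvers, included only to make the loop runnable:

```python
from collections import namedtuple

Column = namedtuple("Column", "reduced_cost")

def column_generation(master, pricing_heuristic, pricing_exact, max_iter=100):
    """Skeleton of the procedure: solve the master LP (step 3), price a new
    IMCC (step 4) trying the cheap heuristic before the exact solver, and
    add any improving column (step 5) until none remains."""
    for _ in range(max_iter):
        duals = master.solve_lp()                      # step 3
        col = pricing_heuristic(duals)                 # step 4(a)
        if col is None or col.reduced_cost >= 0:
            col = pricing_exact(duals)                 # step 4(c)
        if col is None or col.reduced_cost >= 0:
            break                                      # LP(M) is optimal
        master.add_column(col)                         # step 5
    return master

class ToyMaster:
    """Hypothetical stand-in: the dual price shrinks as columns accumulate,
    so pricing eventually finds no negative reduced cost."""
    def __init__(self):
        self.columns = []
    def solve_lp(self):
        return max(0.0, 5.0 - 2.0 * len(self.columns))   # fake dual value
    def add_column(self, col):
        self.columns.append(col)

price = lambda duals: Column(reduced_cost=3.0 - duals)    # fake pricing rule
m = column_generation(ToyMaster(), price, price)
```

On this toy instance, one improving column is generated before the reduced cost reaches zero and the loop certifies optimality.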
The optimal solution of LP(M) only provides a lower bound on the optimal integer solution ILP(M). To derive an integer VN embedding solution, we use the following approach.
LCGMEC-B&B approach

1. Remove the relaxation on the variables λ_c.

2. Apply the classic Branch-and-Bound procedure of CPLEX to the optimal solution of the linear relaxation LP(M) generated by ColumnGenerationProcedure().
Complexity analysis
How often is the CPLEX solver used?
The use of the Column Generation technique means that the MEC mapping problem is divided [29] into a master problem (which includes constraints related to the availability of substrate resources) and a pricing problem (which includes the constraints related to the embedding resources used for granted MEC requests). The problem becomes one of generating an IMCC that improves the current value of the master objective function. To check the optimality of a solution of the LP(M) master model, a subproblem called the pricing problem is solved to try to identify new columns (IMCC configurations) with negative reduced cost.
At each iteration of the column generation process, the master problem is solved to optimality using the CPLEX solver, which guarantees the optimality of the solution obtained at the previous iteration [29]. The CPLEX solver is used to solve the pricing problem only if the heuristic based on k-shortest paths is unable to find a new IMCC configuration (column) with a negative reduced cost. Accordingly, CPLEX is invoked only infrequently for the pricing problem, in the last iterations of the Column Generation process, to prove the optimality of the mapping solution. In this way, we speed up the column generation approach while still guaranteeing the optimality of the LP(M) solution.
Would determining all IMCCs in an online manner incur significant latency?
The Column Generation approach addresses the high computation time of the MILP problem by dividing the MEC embedding problem into a set of subproblems. A subproblem involves embedding a small number of MEC requests, and its solution is represented by an IMCC configuration. Enumerating all IMCCs would take a huge amount of time: there is an exponential number of them. The key insight of the Column Generation approach is that there is no need to enumerate all IMCC configurations: only a few of them are used to serve the MEC requests, and the subproblems track and generate exactly these. Solving the linear relaxation LP(M) of the master problem selects at most N IMCCs at a time to serve the N MEC requests, and is done in polynomial time [33] since the number of generated columns remains quite small.
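The scale of full enumeration is easy to see: with m MECDCs and a request of k virtual nodes there are already m^k candidate node placements, before link mappings are even considered. The numbers below simply instantiate this count for the simulation's parameters:

```python
# Node placements alone for one request of k virtual nodes over m MECDCs.
# m = 10 MECDCs and up to k = 20 virtual nodes match the experiment setup.
m, k = 10, 20
placements = m ** k
print(f"{placements:.2e} candidate placements for a single request")
```

Column generation sidesteps this blow-up by generating only the handful of configurations that can actually improve the master objective.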
Numerical results
Simulation benchmark
To better illustrate the efficiency and superior performance of the Column Generation approach LCGMEC, we compare our proposal to three well-known virtual network embedding algorithms from the literature, using a well-defined set of metrics.

Two-phase mapping approach (2-Phase-Mapping) [14], which first preselects the mapping of hosting nodes and then maps the virtual links.

Bin packing (BP) [11], where hosting and network requirements are mapped using one bin per type of media cloud resource. The Bin packing approach in [11] introduces a method for forming and classifying bins based on the available resources; using this bin classification, incoming requests are mapped accordingly. The pseudo-code for mapping incoming requests with Bin packing is shown in Algorithm 1.

Greedy node mapping combined with a K-shortest-path algorithm for the link-mapping phase (MultiSite) [18]. The main difference from Bin packing is that the greedy approach does not classify bins by resource type; instead, it follows a well-known queueing discipline such as First In First Out. We arrange the incoming requests in ascending order of their cost, defined by the function in equation (2), and then map them to the MECDC infrastructure. The pseudo-code for this resource allocation is shown in Algorithm 2.
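The ordering step of the greedy benchmark (Algorithm 2) can be sketched as follows. The cost field and the single scalar capacity are simplified placeholders for equation (2) and the MECDC resource pools, so this is an illustration of the discipline, not the benchmark's actual code:

```python
def greedy_map(requests, capacity):
    """Sort requests by ascending cost, then first-fit them into the
    remaining capacity; requests that do not fit are blocked."""
    accepted, blocked = [], []
    for req in sorted(requests, key=lambda r: r["cost"]):
        if req["demand"] <= capacity:
            capacity -= req["demand"]     # consume substrate resources
            accepted.append(req["id"])
        else:
            blocked.append(req["id"])     # insufficient resources: reject
    return accepted, blocked

reqs = [{"id": "r1", "cost": 5, "demand": 4},
        {"id": "r2", "cost": 2, "demand": 3},
        {"id": "r3", "cost": 9, "demand": 6}]
accepted, blocked = greedy_map(reqs, capacity=8)
print(accepted, blocked)  # ['r2', 'r1'] ['r3']
```

Because no global optimization is performed, an expensive early request can exhaust capacity that a cheaper later combination would have used better, which is exactly the weakness the comparison below exposes.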
Two main characteristics differentiate our proposal from the benchmarks: (1) the MEC request mapping approach applied, i.e., one-shot vs. two-phase embedding, and (2) small-batch vs. online mapping. To highlight the advantages of the one-shot node and link embedding approach, we compare our proposal to the 2-Phase-Mapping approach. To evaluate the performance of small-batch vs. online embedding, we use BP and MultiSite as benchmarks.
Experiment setup
To evaluate the efficiency of the LCGMEC model, we carried out experiments using the IBM CPLEX solver [30]. The experiments were conducted on a physical infrastructure of 10 MECDCs interconnected through a ring metro WDM optical network topology [20]. For each media cloud request, the number of virtual nodes is drawn uniformly at random between 2 and 20. The minimum connectivity degree is fixed at 2 links. QoS requirements are drawn uniformly at random from the following QoS classes [31]:

1. High-level delay sensitivity 1 (online gaming): requires high bandwidth and low latency.

2. High-level delay sensitivity 2 (high-definition telepresence): requires a high-bandwidth connection and storage, due to the high data volume, as well as GPU power for real-time audiovisual processing to support a high level of immersion and natural interaction between participants, as in face-to-face meetings.

3. Mid-level delay sensitivity 1 (live video streaming, e.g., a sports event): requires low latency and high CPU/GPU power.

4. Mid-level delay sensitivity 2 (office applications, e.g., CRM solutions): requires high storage capacity.

5. Loose-level delay sensitivity (Yahoo and Google Mail): tolerates loose delay requirements but requires high storage capacity.
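The request-generation setup described above can be reproduced with a simple sampler. The class names, seed and dictionary layout below are illustrative stand-ins, not the authors' simulation code:

```python
import random

QOS_CLASSES = ["online-gaming", "hd-telepresence",
               "live-streaming", "office-apps", "mail"]

def generate_request(rng):
    """One media cloud request: 2-20 virtual nodes drawn uniformly,
    each node assigned a QoS class uniformly at random."""
    n_nodes = rng.randint(2, 20)                      # inclusive bounds
    return {"nodes": n_nodes,
            "qos": [rng.choice(QOS_CLASSES) for _ in range(n_nodes)],
            "min_degree": 2}                          # minimum connectivity

rng = random.Random(42)                               # illustrative seed
batch = [generate_request(rng) for _ in range(100)]
# Mean node count per request should be near the expected value of 11.
print(sum(r["nodes"] for r in batch) / len(batch))
```

Varying the class mix per period, as the evaluation section does, amounts to replacing the uniform `rng.choice` with period-specific class weights.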
Bandwidth/CPU/GPU/memory/storage unit costs are expressed in terms of $X, which represents the price of 1 Mb of bandwidth, or of 1 unit of CPU/GPU power, or of 1 GB of memory/storage.
Performance evaluation metrics
In our experiments, we evaluated the following metrics.

1. Mapping cost: the cost of the MECDC resources used.

2. Media cloud demands' blocking ratio: the blocking ratio measures the overall number of rejected MEC requests at each embedding period; it is the ratio of the number of rejected requests to the overall number of requests. While it gives a sense of how well an algorithm is performing, it cannot fully capture performance and customer satisfaction, which also depend on the quality and the cost of the service. In fact, depending on the cost of the MEC requests, 10% of the requests may, for example, generate revenue equivalent to that of the remaining requests.

3. Wavelength utilization: the average ratio between the used and the overall available wavelength bandwidth.

4. CPU/GPU/storage utilization: the ratio between the used and the overall available amounts.

5. Average number of hops: the average number of hops per mapped virtual link.
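Each of these metrics reduces to a simple ratio. The sketch below computes them on toy numbers (not simulation data) to fix the definitions:

```python
def blocking_ratio(rejected, total):
    """Rejected MEC requests over all requests in an embedding period."""
    return rejected / total

def utilization(used, available):
    """Used over available amount (wavelength, CPU, GPU or storage)."""
    return used / available

def avg_hops(hops_per_vlink):
    """Average number of physical hops per mapped virtual link."""
    return sum(hops_per_vlink) / len(hops_per_vlink)

# Toy period: 100 requests with 12 rejected; 40 of 64 wavelength units
# in use; three mapped virtual links traversing 2, 3 and 1 hops.
print(blocking_ratio(12, 100))   # 0.12
print(utilization(40, 64))       # 0.625
print(avg_hops([2, 3, 1]))       # 2.0
```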
Evaluation results
This section describes the performance of the LCGMEC approach compared to related works in terms of resource usage. We also analyze the key factors that impact the optical transmission network among MECDC locations: the wavelength grooming factor, the number of wavelengths, and the network topology (i.e., UPSR vs. BLSR).
Efficiency of lightpath resource allocation approach
We first study the performance of the proposed LCGMEC model compared to the benchmarks in terms of mapping cost, media cloud blocking ratio, and CPU, GPU, storage and wavelength bandwidth usage.
Figure 4a plots the cumulative CP mapping cost against the allocation time periods, comparing LCGMEC with the benchmark models BP, 2-Phase-Mapping and MultiSite. The results show that LCGMEC provides the lowest cost, approximately 37% below MultiSite, which had the highest cost by the end of simulation period 10. The cost of the greedy approaches, MultiSite and BP, is high because they inherently map requests without optimizing. By contrast, the 2-Phase-Mapping solution is obtained by optimization, resulting in a relatively low cost.
Figure 4b plots the blocking ratio of media cloud requests against the allocation time periods. The BP and 2-Phase-Mapping approaches show a high blocking ratio in some periods together with a higher cumulative cost. MultiSite provides the lowest blocking ratio, but at the highest cost. This is because, with the exception of LCGMEC, no approach performs global optimization: the solution determined in the node phase cannot always be satisfied during the lightpath-mapping phase, because resources along the selected paths are not available. With LCGMEC, on the other hand, requests are blocked only when the incoming requirements are beyond what the infrastructure service provider can satisfy at that time.
For every period, several combinations of different traffic classes are generated to mimic real traffic. This explains the jagged nature of the substrate-resource graphs (Figs. 5 and 6). For example, if, in one period, 10% of the traffic is class 1 (online gaming) and the remaining 90% is class 5 (mail traffic), the resource requirement is lower and the acceptance higher than in a scenario where 80% of the traffic is class 1 and the remaining 20% is class 5.
Figure 5a shows the wavelength use of the selected approaches compared to LCGMEC. Higher wavelength use in an optical network corresponds to higher link throughput: the higher the wavelength use, the better the system. At the same time, care must be taken that the paths selected for the requests do not involve too many OEO conversions (more detail is given in the "Impact of number of wavelengths per optical fiber" section). From the results, it is clear that 2-Phase-Mapping has the lowest bandwidth use. Although 2-Phase-Mapping achieved better acceptance, its wavelength use was poor, owing to an improper selection of the requests accepted for embedding: the number of accepted requests may be higher, but the revenue they generate comes at the cost of considerable resource wastage. LCGMEC had the highest overall wavelength use because an initial global optimization was performed, yielding the overall optimal solution in terms of the highest use of link resources at the lowest cost to the service provider. A similar conclusion can be drawn for the other approaches, BP and MultiSite, where the lack of optimization and of a one-shot solution leads to poor use of link resources. Hence, we can conclude from Fig. 5a that a global optimization, if designed with proper constraints, provides better resource use with less complexity.
Figures 5b, 6a and b show the QoS resource usage of the different approaches. Although Fig. 6a and b show that 2-Phase-Mapping had higher overall consumption of GPU and storage resources, LCGMEC made the more efficient (i.e., lower) use of wavelength, CPU, GPU and storage resources. The worst performance was by BP, because of its first-come-first-served approach.
Analysis of network dimensioning key factors
Below, we analyze the key factors that impact MECDC resource utilization and users' QoE, i.e., acceptance ratio, number of hops, and mapping cost.
Impact of grooming factor
Figures 7a, b and c plot the mapping cost, the average number of hops and the wavelength bandwidth use against the grooming factor (the number of client signals that can be loaded onto a given wavelength). The figures show the impact of the grooming factor on the LCGMEC model in terms of wavelength bandwidth use and the average number of hops; these two parameters have a large impact on latency, the most stringent QoS requirement. The grooming factor, in fact, needs to be kept as low as possible in order to avoid congestion and to guarantee an acceptable level of latency for multimedia applications. The figures show the grooming-factor values that provide the optimal average number of hops per service request link as well as optimal wavelength use. In other words, the simulation results illustrate how to adjust the grooming factor with respect to the expected wavelength bandwidth use and the average number of hops needed to keep latency below accepted values.
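The grooming factor's effect on capacity can be illustrated directly: if each wavelength can groom up to g client signals, a link carrying d signals needs ceil(d/g) wavelengths. The numbers below are illustrative, not simulation values:

```python
import math

def wavelengths_needed(signals, grooming_factor):
    """Wavelengths required on a link to carry `signals` client signals
    when each wavelength can groom up to `grooming_factor` of them."""
    return math.ceil(signals / grooming_factor)

# Higher grooming factor -> fewer wavelengths, but more traffic shares
# each lightpath, which is why the text keeps the factor low to bound
# latency while still using the wavelength capacity well.
for g in (1, 2, 4, 8):
    print(g, wavelengths_needed(20, g))  # 20, 10, 5, 3 wavelengths
```

The simulation's tuning of the grooming factor is exactly this trade-off: enough grooming to raise wavelength use, not so much that shared lightpaths congest and the hop count and latency grow.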
Impact of number of wavelengths per optical fiber
Figures 7d, 8a and b plot the blocking ratio, the mapping cost and the average number of hops against the number of wavelengths used per optical fiber link. First, the blocking ratio is clearly closely related to the number of wavelengths used to transmit service request data among MECDC locations: the blocking ratio decreases as the number of wavelengths increases. Second, these figures show that, for a given demand pattern, increasing the number of wavelengths beyond a certain value has no impact on the mapping cost or the average number of hops. For example, from five wavelengths onward, the mapping cost and the average number of hops are nearly constant. These results help to optimize capital expenditures while honoring the QoS requirements inherent in media cloud demand. The number of hops is also crucial in an optical network, as each additional hop may entail an OEO conversion; care must be taken to limit OEO conversions while the system strives to increase its link resource use.
Impact of linking topology
Figures 9a, b and c plot, respectively, the average number of hops, the blocking ratio and the mapping cost against the allocation time periods. The results show the impact of the network topology on the performance of the proposed LCGMEC approach. UPSR and BLSR are the most widely used ring configurations in optical networks, so we compared mapping performance under the two topologies. The results show that the BLSR topology reduces the average number of hops per mapped virtual link, while both topologies yield almost equal mapping costs and blocking ratios. We can conclude that the choice between UPSR and BLSR has a minimal effect on the initial acceptance or performance of user requests; their impact lies, rather, in the resilience they provide to accepted requests. The proposed model can thus be used with both UPSR and BLSR metro networks.
Simulation CPU time
Only one network configuration is used in our simulation, which may limit the generalization of the results, and it is hard to give a precise relationship between CPU time and the network parameters (numbers of nodes and links) or the traffic pattern and volume. Although increasing the network parameters may significantly increase the simulation CPU time, our Column Generation approach still computes a feasible, near-optimal solution by solving the pricing problem heuristically, even with a large network and a large number of requests. As in any network design problem, the larger the numbers of network nodes and links, the longer the CPU time: there is a trade-off between reducing the simulation CPU time and computing the optimal solution. In addition, the current solution is intended for metro optical ring topologies, where the numbers of nodes, links and requests are considerably smaller than in meshed, large-scale networks. In the simulation results, the LCGMEC decomposition approach showed a higher computation time than the benchmarks: the benchmarks required from a few seconds to a few minutes, whereas LCGMEC took between 10 and 20 minutes. However, we showed that the Column Generation LCGMEC decreased the MEC mapping cost by 37% and the blocking ratio by 15% on average, and that MEC resources are used more efficiently.
One might ask whether the significant difference in computation time between the benchmarks and LCGMEC is reasonable. It is. The benchmarks use heuristic approaches, i.e., local search algorithms that examine only a small number of feasible solutions, so their computation time is short; but a heuristic cannot indicate how far the obtained solution is from the optimal one. LCGMEC, by contrast, is based on an exact approach (ILP modelling) using a global search over feasible solutions. Accordingly, (a) the computation time increases significantly with the size of the search space, and (b) an exact algorithm can consume a lot of memory, which also increases computation time. Nevertheless, the LCGMEC approach provides the gap between the obtained solution and the optimal one, even when the algorithm is stopped before completion. It can even prove optimality, if the integer embedding solution equals the optimal lower bound provided by the linear relaxation LP(M) of the master problem.
Computation times were obtained using CPLEX solver 12.3 on an Intel Core Duo Dell machine running Windows 7 Enterprise. Since LCGMEC uses a Branch-and-Bound algorithm to find an integer solution, we cannot claim to have computed the optimal solutions; however, the solutions remain satisfactory compared to those obtained with the benchmark models. In addition, the difference between the value of the incumbent integer solution of the ILP(M) model and the optimal value of the linear relaxation LP(M) is smaller than 5%. This is satisfactory, given that most proposals in the literature are heuristic or based on two-phase mapping, and that one-shot node and link embedding is an NP-hard problem. For an optimal solution, the LCGMEC approach can be combined with a branch-and-price procedure, in which branching rules are defined carefully to avoid generating a huge number of pricing problems. Branching can be done either on the variables of the master problem, using cuts, or on the variables of the pricing problem, using a classic branch-and-bound procedure or cuts. We propose to examine this technique in future work.
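The reported gap follows directly from the incumbent value of ILP(M) and the LP(M) lower bound. The values below are illustrative only, chosen to land inside the reported 5% bound:

```python
def optimality_gap(incumbent, lower_bound):
    """Relative gap between the incumbent integer solution of ILP(M)
    and the lower bound given by the linear relaxation LP(M)."""
    return (incumbent - lower_bound) / lower_bound

# Illustrative values: a cost-minimization incumbent of 1040 against an
# LP lower bound of 1000 gives a 4% gap, within the reported 5%.
gap = optimality_gap(1040.0, 1000.0)
print(f"{gap:.1%}")   # 4.0%
assert gap < 0.05
```

A gap of zero would certify optimality of the incumbent, which is the branch-and-price refinement mentioned above.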
Conclusion
Processing, transmitting and storing media data in MECDCs can enhance the QoE of end users in terms of latency, acceptance ratio, reliability and cost. The use of MECDCs, compared to traditional DCs, for media requests has known limitations: limited resource availability at a given DC and high resource requirements. To efficiently manage both the network and the MECDC resources, we proposed a CG-based one-shot model. For each accepted request, an optimal one-shot networking and hosting scheme is calculated to ensure the QoS requirements. We also analyzed the key factors that impact the network among MECDC locations: wavelength grooming, wavelength bandwidth capacity, the number of wavelengths per optical link, the number of MSPP transponders, and the network topology (UPSR vs. BLSR). Simulation results showed that the LCGMEC approach performs significantly better than the benchmark approaches from the literature.
As a future improvement, we would like to consider Dense Wavelength Division Multiplexing factors and study their impact on QoE of multimedia application users.
Abbreviations
 IaaS:

Infrastructure as a service
 CS:

Compressive sensing
 CP:

Cloud service providers
 LCGMEC:

Large scale column generation media edge cloud
 MECDC:

Media edge cloud data centers
 MSPP:

Multi service provisioning platform
 QoE:

Quality of experience
 SLA:

Service level agreement
 QoS:

Quality of service
 UGC:

User generated content
 VM:

Virtual machine
 VN:

Virtual network
 WDM:

Wavelength division multiplexing
 WMN:

Wireless mesh network
References
Xu H, Li B (2012) A general and practical datacenter selection framework for cloud services. IEEE 5th International Conference on Cloud Computing (CLOUD), Honolulu, HI, USA. pp. 9–16.
Fesaehaye D, Gao Y, Nahrstedt K, Wang G (2012) Impact of Cloudlets on interactive mobile cloud application, 123–132.. IEEE 16th Conference International Enterprise Distributed Object Computing (EDOC), Beijing, China.
Endo P, et al. (2011) Resource allocation for distributed cloud: concepts and research challenges. IEEE Network 25(4).
Zhang Q, Zhani M, Jabri M, Boutaba R (2014) Venice: Reliable virtual data center embedding in clouds. INFOCOM:289–297.
Hou W, Guo L, Liu Y, Yu C, Zong Y (2015) Resource management and control in converged optical data center networks: survey and enabling technologies. Computer Networks 88:121–135.
Bari DF, et al. (2013) DataCenter Network Virtualization: A Survey, Communications Surveys and Tutorials 15(2):909–928.
Pillai P, Lewis G, Simanta S, Clinch S, Davies N, Satyanarayanan M (2013) The Impact of Mobile Multimedia Applications on Data Center Consolidation. IEEE International Conference Cloud Engineering (IC2E), Redwood City, CA,USA. pp 166–176.
Zhu W, Luo C, Wang J, Li S (2011) Multimedia cloud computing. IEEE Signal Process Mag 28:59–69.
Peng S, Nejabati R, Simeonidou D (2013) Role of Optical Network Virtualization in Cloud Computing. IEEE/OSA J Opt Commun Netw 5(10):162–170.
Jarray A, Jaumard B, Houle AC (2010) Reducing the CAPEX and OPEX Costs of Optical Backbone Networks. IEEE International Conference on Communications (ICC), Cape Town, South Africa.
EyraudDubois L, Larcheveque H (2013) Optimizing Resource allocation while handling SLA violations in Cloud Computing platforms. IEEE 27th International Symposium on Parallel & Distributed Processing (IPDPS), Boston, MA, USA.
Zeng LZ, Ye X (2012) Multiobjective Optimization Based Virtual Resource Allocation Strategy for Cloud Computing. IEEE/ACIS 11th International Conference on Computer and Information Science (ICIS), Shanghai, China.
Nonde L, et al. (2014) Green Virtual Network Embedding in Optical OFDM Cloud Networks. 16th International Conference on Transparent Optical Networks (ICTON), Graz, Austria.
Zhu Y, Ammar MH (2006) Algorithms for Assigning Substrate Network Resources to Virtual Network Components. IEEE Infocom:1–12.
Chowdhury NMK, Rahman MR, Boutaba R (2012) ViNEYard: Virtual Network Embedding Algorithms With Coordinated Node and Link Mapping. IEEE/ACM Trans Netw 20(1):206–219.
Thompson K, Miller GJ, Wilder R (1997) WideArea Internet Traffic Patterns and Characteristics. IEEE Network 11(6):10–23.
Jarray A, et al. (2012) DDP: A Dynamic Dimensioning and Partitioning model of Virtual Private Networks resources. Comput Commun 35:906–915.
Nan X, He Y, Guan L (2012) Optimal Resource Allocation for Multimedia Application Providers in Multisite Cloud. IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China.
Tzanakaki A, et al. (2011) Energy Efficiency in integrated IT and Optical Network Infrastructures: The GEYSERS approach. IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Shanghai, China.
Gokhale P, Kumar R, Das T, Gumaste A (2010) Cloud computing over metropolitan area WDM networks: the lighttrails approach. IEEE Global Telecommunications Conference (GLOBECOM 2010), Miami, FL, USA.
Tzanakaki A, Anastasopoulos MP, Zervas GS, Rofoee BR, Nejabati R, Simeonidou D (2013) Virtualization of heterogeneous wirelessoptical network and IT infrastructures in support of cloud and mobile cloud services. IEEE Commun Mag 51(8):155–161.
Hou W, Guo L, Liu Y, Song Q, Wei X (2013) Virtual Network Planning for Converged Optical and Data Centers: Ideas and Challenges. IEEE Network 27(6):52–58.
Anastasopoulos MP, Tzanakaki A, Simeonidou D (2015) Scalable services provisioning in converged optical wireless clouds using compression techniques. OFC.
Develder C, Buysse J, Dhoedt B, Jaumard B (2014) Joint dimensioning of server and network infrastructure for resilient optical grids/clouds. IEEE/ACM Trans Netw 22(5):1591–1606.
Gong X, Guo L, Shen G, Tian G (2017) Virtual Network Embedding for Collaborative Edge Computing in OpticalWireless Networks. in J Lightwave Tech 35(18):3980–3990.
Hou W, Ning Z, Guo L, Chen Z, Obaidat MS (2017) Novel Framework of RiskAware Virtual Network Embedding in Optical Data Center Networks. in IEEE Syst J PP(99):1–10.
Eppstein D (1998) Finding the shortest paths. SIAM J Comput 28(2):652–673.
Anderson DG (2002) Theoretical Approaches to Node Assignment. Unpublished manuscript, available at: https://www.cs.cmu.edu/~dga/papers/andersenassignabstract.html. Accessed Dec 2017.
Lübbecke ME, Desrosiers J (2005) Selected Topics in Column Generation. Operations Res 53:1007–1023.
IBM ILOG Cplex 12.6.1, User Manual. http://pic.dhe.ibm.com/infocenter/cosinfoc/v12r4/topic/ilog.odms.studio.help/pdf/usrcplex.pdf. Accessed Dec 2017.
Ceselli A, Premoli M, Secci S (2015) Cloudlet Network Design Optimization. IFIP.
Dean J, Ghemawat S (2008) MapReduce: Simplified data processing on large clusters. Commun ACM 51(1):107–113.
Schrijver A (1986) Theory of Linear and Integer Programming. Wiley, New York.
Funding
This work was supported by a Grant from Natural Sciences and Engineering Research Council of Canada. The funding agency is not involved in this research.
Availability of data and materials
Not applicable.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional information
Authors’ Contributions
This paper describes a collaborative project between the University of Ottawa, Canada, and the University of Paris Descartes, France. AJ: Design, modelling, analysis and interpretation of data, prepare first draft of the paper. AK: Design, revise and give final approval of paper. JS: Implementation, simulation, analysis and interpretation of data, drafted the paper. JE: Design and revise the paper. AM: Design and final approval of the paper. FZ: Draft and revise the paper. All authors have read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Jarray, A., Karmouch, A., Salazar, J. et al. Efficient resource allocation and dimensioning of media edge clouds infrastructure. J Cloud Comp 6, 27 (2017). https://doi.org/10.1186/s13677-017-0099-7
DOI: https://doi.org/10.1186/s13677-017-0099-7
Keywords
 Media cloud service
 Media edge data center
 Resource allocation
 Cloud provider
 Optical network
 Linear programming
 Column generation