
Advances, Systems and Applications

Joint optimization strategy for QoE-aware encrypted video caching and content distributing in multi-edge collaborative computing environment


Video request services in 5G networks will grow explosively, and adaptive bit rate technology can provide users with reliable video responses. Placing video resources on edge servers close to users avoids the excessive network load incurred by traditional centralized cloud platform solutions. Moreover, multiple edge servers can provide caching and transcoding support through collaboration mechanisms, which further improves users’ Quality of Experience (QoE). However, the diversity of collaboration mechanisms and the competition between local and collaborative services for the computing and storage resources of edge servers increase the difficulty of designing video caching and content distribution strategies. To solve this problem, the video caching and content distribution problem is modeled as a random integer programming problem in a multi-edge-server scenario with at most two-hop collaboration. To improve the security of video data transmission, the video stream is encrypted using an encryption algorithm based on the Logistic chaotic system and Quantum-dot Cellular Automata (QCA). To improve the efficiency of solving the integer programming problem, this paper uses a pyramid-structured intelligent evolutionary algorithm based on an optimal cooperation strategy. Simulation experiments show that the proposed method obtains higher QoE values than several recent methods. In addition, the average access delay of the proposed method is shortened by more than 27.98%, which verifies its reliability.


In 5G networks, video services will become mainstream, and the contradiction between the explosive growth of data volume and QoE is becoming increasingly prominent [1]. In other words, when a user in a 5G network requests a video resource at a certain bit rate, remote storage devices must respond with the appropriate codec operations within the shortest possible time [2]. Therefore, research on QoE-aware video caching and content distribution technology has attracted extensive attention from academia and industry.

Owing to differences in users’ own hardware processing capabilities, network channel conditions, and so on, different users usually request video files of different quality from remote video storage devices [3]. Based on these behavioral characteristics, Adaptive Bit Rate (ABR) technology is widely used in video services to improve users’ QoE [4, 5].

Because there are obvious differences in users’ hardware processing capabilities and network channel conditions, and because the energy carried by user terminals such as mobile phones and tablet computers is often limited, schemes that superimpose and decode multiple coding layers on the user side to obtain the required bit rate, such as Scalable Video Coding (SVC) [6], are generally not used. Instead, the more common implementation of ABR is to first encode the same video file at the remote end into versions with different formats and resolutions; then, according to the user’s request and network conditions, a suitable variant file is selectively sent to the user [7, 8]. This method uses the “storage/compute-transmit” mode, and its advantage is that the decoding overhead of mobile terminals is avoided, thereby saving the corresponding energy consumption. The corresponding disadvantage, however, is that when the number of users is very large and video requests are frequent, video traffic in the network grows explosively, causing the transmission time of video files to become too long and QoE to drop.

Further, ABR technologies adopting the “storage/computing-transmission” mode can be divided into centralized cloud computing solutions and decentralized edge collaborative computing solutions. In centralized cloud computing solutions, video storage and codec operations are all implemented on a remote cloud platform, and users obtain the corresponding video resources from the far end [9, 10]. In decentralized edge collaborative computing solutions, video files with different bit rates and formats are stored on edge computing devices closer to users, and different edge computing devices can implement ABR functions through cooperative codec operations and resource interaction [11, 12]. Compared with centralized cloud computing solutions, decentralized edge computing solutions can effectively reduce video traffic overhead in the network, and since edge computing devices are closer to the user side, the QoE of users is correspondingly higher [13]. It should be pointed out, however, that the computing and storage resources of edge computing devices are often limited, and a single edge computing device cannot satisfy the explosive growth of video requests in 5G networks [14]. Moreover, the possible responses of edge computing devices to the same video request are very diverse: for example, the local node caches and transmits directly, a neighbor node caches and transmits, a neighbor node transcodes and then caches and transmits, or a neighbor node caches and then transcodes and transmits. This undoubtedly increases the difficulty of designing video caching and content distribution mechanisms. This paper mainly designs a QoE-aware video caching and content distribution optimization strategy in a multi-edge collaborative computing environment.

Related works

Considering the hardware resources and energy consumption requirements of user terminals, passive ABR technology that requires users to perform the decoding operation, of which traditional SVC coding [6] is representative, is no longer suitable for video services in 5G networks. In contrast, approaches in which resource caching and encoding/decoding are performed by remote resource storage devices have been welcomed by academia and industry. The basic principle is that, using the “storage/computing-transmission” mode, remote resource storage devices adaptively provide video streams at the corresponding bit rate according to user requests and network conditions [15, 16]. There are two main technical solutions in this mode, namely centralized cloud computing solutions and decentralized edge collaborative computing solutions.

Centralized cloud computing solution: All video files are stored on a remote cloud platform, and users obtain video resources at a specific bit rate over wide area networks. Many references discuss the design of video caching and content distribution mechanisms under the cloud computing framework. For example, under the centralized cloud computing framework, reference [17] designed a physical cache and a virtual cache to handle video file storage and online transcoding, respectively. Reference [18] proposed a rate adaptation algorithm that uses video characteristics to simultaneously change the video encoding and transmission rate, which increases the amount of video traffic the network can tolerate. Reference [19] modeled the cache management of video stream files as a constrained optimization problem under server storage resource constraints. Reference [20] verified an online architecture capable of using Docker for real-time video transcoding in a Kubernetes-based cloud environment; the random forest regressor used in this framework provided the best overall performance in terms of transcoding speed, CPU consumption, and accuracy in predicting the number of transcoding tasks, but the efficiency of its reinforcement learning component was low. However, in the centralized cloud computing mode, user video requests are served only after caching and encoding/decoding operations on the cloud platform. Thus, to ensure users’ QoE, the cloud computing model places high requirements on network bandwidth and on the hardware of storage and computing equipment, and has the disadvantage of high construction and maintenance costs [21].

Decentralized edge computing solution: In this solution, video files with different bit rates and formats are first stored on multiple edge computing devices close to the user side. When a user requests a video service with a certain bit rate and format, the edge computing devices implement video caching, codec, and transmission operations in a cooperative manner [22, 23]. Compared with the centralized cloud computing model, the network backhaul time between edge computing devices and users is shorter, and the hardware performance requirements are also lower. It should be pointed out, however, that the implementation of ABR technology under edge computing schemes is diverse. For example, for the same video file request, an edge computing device can transmit directly from its local cache, fetch from a neighbor node and then transmit, have a neighbor node transcode and then cache and transmit, or fetch from a neighbor node and then transcode and transmit; similar work was shown in [24]. In summary, to obtain satisfactory QoE in a multi-edge collaborative computing environment, a video caching and content distribution mechanism with good performance must be flexibly adjusted according to network conditions, network topology, edge device working status, and so on [25].

Existing work has carried out preliminary research on video caching and content distribution mechanisms in multi-edge collaborative computing environments and has achieved some beneficial results. For example, reference [26] proposed an adaptive wireless video transcoding framework in the emerging edge computing mode to achieve finer-grained video transcoding; however, this solution inevitably consumed additional computing resources while tracking traffic changes. Reference [27] considered the collaboration between multiple edge servers, but without cooperative transcoding services between them: all video transcoding operations were performed on local servers, which requires high server computing performance and storage capacity. Reference [28] considered collaborative caching and transcoding between edge servers, but only in a single-hop mode in which caching and transcoding involve just two edge servers.

Based on the above analysis, this paper proposes a multi-edge collaborative video caching and content distribution mechanism based on random integer programming. The main innovations are as follows:

1) Considering that video caching and content distribution can complete caching and transcoding operations on different edge servers, a video caching and content distribution mechanism including two-hop cooperation of edge servers is proposed. Compared with the traditional single-hop cooperation mode, which considers only two edge servers, the proposed caching and distribution mechanism is more general;

2) Based on the stochastic optimization method, with user QoE as the optimization goal, the video caching and content distribution problem is modeled as a random integer linear programming problem. In particular, video caching on edge devices fully takes the popularity of videos into account, which further improves the hit rate of cached videos and the corresponding QoE.

3) In order to improve the security of video data transmission, the video stream is encrypted using an encryption algorithm based on the Logistic chaotic system and Quantum-dot Cellular Automata (QCA).

4) In order to improve the efficiency of solving the integer programming problem, a pyramid-structured intelligent evolutionary algorithm based on an optimal cooperation strategy is proposed.

System model

As shown in Fig. 1, let the number of edge servers be N. Each edge server can cache videos from the video resource library on remote servers and can also perform codec operations. Edge servers exchange data bidirectionally over high-speed links between them, and each can also be linked directly to remote servers through the backhaul link. The caching and codec operations of edge servers are subject to the scheduling and control of the control center.

Fig. 1: Video caching and content distribution model in a multi-edge collaborative computing environment

Video coding encryption technology

The rapid development of communication technology provides users with diversified and differentiated video services. At the same time, the importance of video transmission security to both video providers and users cannot be ignored. At present, the industry generally uses the H.264/AVC encoding standard to compress and transmit videos with low distortion. This paper uses an encryption algorithm based on the Logistic chaotic system and Quantum-dot Cellular Automata (QCA) [29] to encrypt the video coding.

The flow of the Logistic chaotic-QCA key generation algorithm is shown in Fig. 2. Here, the Quantum Cellular Neural Network (QCNN) matrix A is obtained by the QCA using the Logistic chaotic system for h consecutive iterations, where h is a multiple of 512. The Logistic chaotic system is shown in formula (1).

$$ {X}_{h+1}=f\left({X}_h\right)=\varPsi {X}_h\left(1-{X}_h\right) $$

where Ψ ∈ (0, 4), Xh+1 ∈ (0, 1), h = 1, 2, 3, …; Xn (n = 1, 2, 3, 4) is the QCA state vector, which satisfies

$$ \Big\{{\displaystyle \begin{array}{c}{\dot{X}}_1=-2{\omega}_{01}\sqrt{1-{X}_1^2}\sin {x}_2\\ {}{\dot{X}}_2=-{\omega}_{02}\left({X}_1-{X}_3\right)+2{\omega}_{01}\frac{X_1}{\sqrt{1-{X}_1^2}}\cos {X}_2\\ {}{\dot{X}}_3=-2{\omega}_{03}\sqrt{1-{X}_3^2}\sin {X}_4\\ {}{\dot{X}}_4=-{\omega}_{04}\left({X}_3-{X}_4\right)+2{\omega}_{03}\frac{X_3}{\sqrt{1-{X}_3^2}}\cos {X}_4\end{array}} $$

where X1, X3 are the polarizabilities; X2, X4 are the quantum phases; ω01, ω03 are coefficients proportional to the energy between quantum dots within each cell; ω02, ω04 are coefficients weighting the effect of the polarizability difference of adjacent cells.
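As a concrete illustration, the Logistic map of formula (1) can be iterated in a few lines; the parameter Ψ and seed below are illustrative values, not taken from the paper:

```python
def logistic_sequence(x0: float, psi: float, h: int) -> list[float]:
    """Iterate the Logistic map X_{h+1} = psi * X_h * (1 - X_h) for h steps."""
    assert 0.0 < x0 < 1.0 and 0.0 < psi < 4.0
    xs = [x0]
    for _ in range(h):
        xs.append(psi * xs[-1] * (1.0 - xs[-1]))
    return xs

# With psi close to 4 the orbit is chaotic but stays inside (0, 1).
seq = logistic_sequence(0.37, 3.99, 512)
```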

Fig. 2: Logistic-QCA encryption process

Suppose the generated matrix A satisfies

$$ A=\left[\begin{array}{ccccc}{X}_{11}& {X}_{12}& \cdots & {X}_{1\left(n-1\right)}& {X}_{1n}\\ {}{X}_{21}& {X}_{22}& \cdots & {X}_{2\left(n-1\right)}& {X}_{2n}\\ {}{X}_{31}& {X}_{32}& \cdots & {X}_{3\left(n-1\right)}& {X}_{3n}\\ {}{X}_{41}& {X}_{42}& \cdots & {X}_{4\left(n-1\right)}& {X}_{4n}\end{array}\right] $$

Split matrix A into matrix B composed of the first 3 rows of elements and matrix C composed of the last row of elements, that is

$$ B=\left[\begin{array}{ccccc}{X}_{11}& {X}_{12}& \cdots & {X}_{1\left(n-1\right)}& {X}_{1n}\\ {}{X}_{21}& {X}_{22}& \cdots & {X}_{2\left(n-1\right)}& {X}_{2n}\\ {}{X}_{31}& {X}_{32}& \cdots & {X}_{3\left(n-1\right)}& {X}_{3n}\end{array}\right] $$
$$ C=\left[{X}_{41}\kern0.5em {X}_{42}\kern0.5em \cdots \kern0.5em {X}_{4\left(n-1\right)}\kern0.5em {X}_{4n}\right] $$

The matrix B is further processed and converted into a row vector to form a key sequence, denoted as S, as shown in the following formula

$$ S=\left\{{S}_1,{S}_2,\dots, {S}_{3\times \mathrm{n}}\right\}=\left\{{X}_{11},{X}_{12},\dots, {X}_{21},{X}_{22},\dots, {X}_{3\left(n-1\right)},{X}_{3n}\right\} $$

The elements of S are taken in order and grouped into blocks of 512 to form a chaotic sequence pool H, as shown in Eq. (7).

$$ H=\left\{{h}_1,{h}_2,\dots, {h}_L\right\} $$
$$ \Big\{{\displaystyle \begin{array}{c}{h}_1=\left\{{S}_1,{S}_2,...,{S}_{512}\right\}\\ {}{h}_2=\left\{{S}_{513},{S}_{514},...,{S}_{1024}\right\}\\ {}\vdots \\ {}{h}_L=\left\{{S}_{\left(L-1\right)\times 512+1},{S}_{\left(L-1\right)\times 512+2},...,{S}_{L\times 512}\right\}\end{array}} $$

where L is an integer in the interval (0, 3n/512].
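A minimal sketch of the grouping in Eqs. (7)-(8), assuming trailing elements that do not fill a complete 512-element group are discarded:

```python
def chaotic_pool(S: list, group: int = 512) -> list:
    """Partition the key sequence S into consecutive groups of `group`
    elements: h_1 = {S_1..S_512}, h_2 = {S_513..S_1024}, ..."""
    L = len(S) // group
    return [S[i * group:(i + 1) * group] for i in range(L)]

H = chaotic_pool(list(range(1, 1201)))   # 1200 elements -> L = 2 groups
```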

For the elements in sequence C, the index sequence Index is generated according to formula (9):

$$ Index=\mathrm{map}\ \min\ \max \left(C,0,1\right) $$

where map min max(C, 0, 1) means mapping the values in sequence C to the interval [0, 1].

Index(i) and Index(j) are selected from the sequence Index as initial values, and Index_Log1 and Index_Log2 are generated by performing Logistic transformations on them. These values are then rounded up according to formula (10) to map them to the integers IndexC_Log1 and IndexC_Log2 in the interval [1, L].

$$ \Big\{{\displaystyle \begin{array}{c} IndexC\_ Log1=\left\lceil Index\_ Log1\times L\right\rceil \\ {} IndexC\_ Log2=\left\lceil Index\_ Log2\times L\right\rceil \end{array}} $$

Finally, the groups indexed by IndexC_Log1 and IndexC_Log2 are selected from the key sequence pool, and the key bits are obtained by comparing them bit by bit according to formula (11), until a complete 512-bit key is obtained.

$$ Key=\left({S}_{IndexC\_ Log1}\ge {S}_{IndexC\_ Log2}\right)\ ?\ 1:0 $$

The original video is encoded with H.264/AVC and contains two types of data: compressed video data and residual data [30]. To improve the reliability of video transmission, this paper uses different key segments to encrypt these two types of data: the first 256 bits of the Key encrypt the compressed video data, and the last 256 bits encrypt the residual data.
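The key selection of formulas (10)-(11) can be sketched as follows; the pool contents and index values are illustrative, and the element-wise comparison of two selected groups is our reading of the bit-by-bit rule:

```python
import math

def generate_key(H: list, idx_log1: float, idx_log2: float) -> list[int]:
    """Map two Logistic-transformed values in (0, 1] to group indices in
    [1, L] by rounding up (formula (10)), then compare the two selected
    512-element groups element by element (formula (11)) to get 512 key bits."""
    L = len(H)
    c1 = max(1, math.ceil(idx_log1 * L))   # IndexC_Log1
    c2 = max(1, math.ceil(idx_log2 * L))   # IndexC_Log2
    g1, g2 = H[c1 - 1], H[c2 - 1]          # the pool uses 1-based indices
    key = [1 if a >= b else 0 for a, b in zip(g1, g2)]
    # first 256 bits -> compressed video data, last 256 bits -> residual data
    return key
```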

Multi-rate video cache model

Let V = {1, 2, …, s, …, S} be the video collection. Each video can be encoded into M different versions. Define vs = {vsm | m = 1, 2, …, M} as the variant set of the s-th video file. A video file can be characterized by its bit rate and playback duration; note that, for the same video, all variant files have the same playback duration. Therefore, video file vsm can be described by the following two-tuple, namely

$$ {v}_{sm}:\left({r}_{sm},{l}_s\right) $$

where rsm and ls are the bit rate and playback duration of vsm, respectively.

Without loss of generality, let the variant files in vs be stored in ascending order of bit rate, that is, rs1 < rs2 < … < rsM; in addition, low bit rate files can be transcoded from high bit rate files. Let each edge server have a cache of size C to store video copies, where C is greater than the size of the video at the maximum bit rate, i.e.

$$ C=\alpha {r}_{sM} $$

where α is a coefficient greater than 1.

Based on the above analysis, it is first assumed that the cache capacity of each edge server is limited, but that video files of any bit rate can be cached to serve the requests of different users. Note that although a video generally has multiple versions with different bit rates, mainstream commercial streaming systems, considering user QoE and network conditions, usually adaptively adjust the transmitted video file to a level that matches the current network conditions. Thus, as shown in Fig. 3, the video caching strategy in this paper is:

Fig. 3: Multi-rate video cache model

When a user k (k = 1, 2, …, K) requests video vs, if no edge server within k’s single-hop communication range caches a copy of vs at any bit rate, then k directly obtains vs from the remote server at the lowest bit rate that the current network can afford. Otherwise, user k obtains the highest bit rate version cached by the edge servers within its single-hop communication range.
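The caching strategy above amounts to a simple selection rule; the following sketch assumes that version indices are ordered so that a larger index means a higher bit rate:

```python
def select_version(cached_versions: set, rates: list):
    """cached_versions: indices of v_s variants cached within single-hop
    range; rates: ascending bit rates r_s1 < ... < r_sM.  Returns the chosen
    bit rate and its source ('edge' or 'remote')."""
    if cached_versions:
        return rates[max(cached_versions)], "edge"   # highest cached version
    return rates[0], "remote"                        # lowest rate from remote
```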

Video distribution strategy design

Under the edge server collaborative computing framework, a user’s video request can be served from a remote server or from an edge server directly connected to the user via a single hop, and can also be served through the transcoding operations of other edge servers. Figure 4 shows all possible distribution methods when a user requests the 360p version of a video file.

Fig. 4: All possible multi-edge collaborative computing video caching and distribution models

With reference to Fig. 4, the following 8 binary variables are introduced to characterize the feasible video distribution schemes in the multi-edge collaborative computing environment:

1) \( {a}_n^{sm}(t)=1 \) means that the user directly obtains video vsm from the cache of the edge server connected to it by a single hop (denoted n; the same below); otherwise \( {a}_n^{sm}(t)=0 \), as shown in Fig. 4(a);

2) \( {b}_n^{sm}(t)=1 \) means that video vsm requested by the user is obtained by a transcoding operation on a higher bit rate version at edge server n; otherwise \( {b}_n^{sm}(t)=0 \), as shown in Fig. 4(b);

3) \( {c}_{n{n}^{\prime}}^{sm}(t)=1 \) means that video vsm requested by the user is obtained directly from edge server n′ (n′ ≠ n); otherwise \( {c}_{n{n}^{\prime}}^{sm}(t)=0 \), as shown in Fig. 4(c);

4) \( {d}_{n{n}^{\prime}}^{sm}(t)=1 \) means that video vsm requested by the user is obtained by transcoding from a higher version at edge server n′ (n′ ≠ n); otherwise \( {d}_{n{n}^{\prime}}^{sm}(t)=0 \), as shown in Fig. 4(d);

5) \( {e}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)=1 \) indicates that edge server n first obtains version vsm′ from edge server n′ (n′ ≠ n) and then performs a transcoding operation locally to produce vsm; otherwise \( {e}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)=0 \), as shown in Fig. 4(e);

6) \( {f}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)=1 \) indicates that edge server n′ (n′ ≠ n) first obtains a higher bit rate version from edge server n, then performs the transcoding operation and returns the requested video vsm to edge server n; otherwise \( {f}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)=0 \), as shown in Fig. 4(f). Note that this distribution mode involves a two-hop cooperation mechanism between edge servers;

7) \( {g}_{n{n}^{\prime }{n}^{{\prime\prime}}}^{sm{m}^{\prime }}(t)=1 \) indicates that edge server n″ first obtains version vsm′ from edge server n′, transcodes it, and then delivers vsm to edge server n; otherwise \( {g}_{n{n}^{\prime }{n}^{{\prime\prime}}}^{sm{m}^{\prime }}(t)=0 \), as shown in Fig. 4(g). This distribution mode also involves a two-hop cooperation mechanism between edge servers;

8) hsm(t) = 1 means that video vsm requested by the user is obtained from a remote server; otherwise hsm(t) = 0, as shown in Fig. 4(h).

Problem modeling

Problem objective and QoE function

The problem of video caching and content distribution is modeled as a constrained optimization problem.

First, define the following cache strategy set:

$$ X=V\times R\times N=\left\{\left(v,r,n\right):v\in V,r\in R,n\in N\right\} $$

The above set indicates that the r bit rate version of video v is cached in edge server n. According to the adaptive bit rate caching strategy designed in Section 2.1, the bit rate selected by user k can be expressed as

$$ {r}_{k,v}={r}_{{\left[\mathrm{argmax}\,r\in R\left\{\exists n\in N,\left(v,r,n\right)\in X\right\}\right]}^{1+}} $$

where [x]1+ = x if x ≥ 1, and [x]1+ = 1 otherwise.

Further, the delay of distributing video content is discussed. Let τn0 and \( {\tau}_{n{n}^{\prime }} \) be the unit delays when edge server n obtains video from remote servers and from edge server n′, respectively. Generally, \( {\tau}_{n0}\gg {\tau}_{n{n}^{\prime }} \). It should be noted that, because of the limited network bandwidth between edge servers, \( {\tau}_{n{n}^{\prime }} \) is highly correlated with the network topology between edge servers. Let τT be the delay of a video transcoding operation. The delays under the different distribution strategies are given below.

1) For the distribution mode shown in Fig. 4(a), the content access delay is \( {\tau}_1^n=0 \), that is, the video can be transmitted to the user directly without fetching from other servers;

2) For the distribution mode shown in Fig. 4(b), the content access delay is

$$ {\tau}_2^n={\tau}_T{b}_n^{sm}(t) $$
3) For the distribution mode shown in Fig. 4(c), the content access delay is

$$ {\tau}_3^n={\sum}_{n\ne {n}^{\prime }}{r}_{sm}{l}_s{\tau}_{n{n}^{\prime }}{c}_{n{n}^{\prime}}^{sm}(t) $$
4) For the distribution mode shown in Fig. 4(d), the content access delay is

$$ {\tau}_4^n={\sum}_{n\ne {n}^{\prime }}\left({r}_{sm}{l}_s{\tau}_{n{n}^{\prime }}+{\tau}_T\right){d}_{n{n}^{\prime}}^{sm}(t) $$
5) For the distribution mode shown in Fig. 4(e), the content access delay is

$$ {\tau}_5^n={\sum}_{n\ne {n}^{\prime }}{\sum}_{m^{\prime }=1}^{m-1}\left({r}_{s{m}^{\prime }}{l}_s{\tau}_{n{n}^{\prime }}+{\tau}_T\right){e}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t) $$

where m > m′ (the same below);

6) For the distribution mode shown in Fig. 4(f), the content access delay is

$$ {\tau}_6^n={\sum}_{n\ne {n}^{\prime }}{\sum}_{m^{\prime }=1}^{m-1}\left({r}_{sm}{l}_s{\tau}_{n{n}^{\prime }}+{r}_{s{m}^{\prime }}{l}_s{\tau}_{n{n}^{\prime }}+{\tau}_T\right){f}_{n{n}^{\prime}}^{sm}(t) $$
7) For the distribution mode shown in Fig. 4(g), the content access delay is

$$ {\tau}_7^n={\sum}_{n\ne {n}^{\prime }}{\sum}_{n^{{\prime\prime}}\ne {n}^{\prime}\ne n}{\sum}_{m^{\prime }=1}^{m-1}\left({r}_{sm}{l}_s{\tau}_{n{n}^{{\prime\prime} }}+{r}_{s{m}^{\prime }}{l}_s{\tau}_{n^{{\prime\prime} }{n}^{\prime }}+{\tau}_T\right){g}_{n{n}^{\prime }{n}^{{\prime\prime}}}^{sm{m}^{\prime }}(t) $$
8) For the distribution mode shown in Fig. 4(h), the content access delay is

$$ {\tau}_8^n={r}_{sm}{l}_s{\tau}_{n0}{h}^{sm}(t) $$

Therefore, for a certain bit rate video, the content access delay is

$$ {\tau}_{\varSigma}^n=\sum \limits_{i=1}^8{\tau}_i^n $$
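For intuition, the per-mode delays can be sketched for a single request; modes (a)-(d) and (h) are shown, and all parameter values in the usage below are illustrative:

```python
def access_delay(mode: str, r: float, l: float, tau_nn: float = 0.0,
                 tau_n0: float = 0.0, tau_T: float = 0.0) -> float:
    """Content access delay of one request for a file of r*l bits.
    tau_nn: unit delay between two edge servers, tau_n0: unit delay to the
    remote server, tau_T: transcoding delay."""
    if mode == "a":                    # local cache hit, direct transmission
        return 0.0
    if mode == "b":                    # local transcoding of a cached version
        return tau_T
    if mode == "c":                    # fetched from a neighbour cache
        return r * l * tau_nn
    if mode == "d":                    # neighbour transcodes, then transfers
        return r * l * tau_nn + tau_T
    if mode == "h":                    # fetched from the remote server
        return r * l * tau_n0
    raise ValueError(f"unknown mode {mode}")
```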

Moreover, video popularity follows a Zipf distribution [31]; that is, the frequency with which a video is requested is positively correlated with its popularity. Edge servers therefore preferentially cache more popular videos in order to shorten the delay of fetching them from remote servers. Accordingly, the following QoE function is defined in this paper

$$ {J}_n={\left({\tau}_{\varSigma}^n\right)}^{\gamma_s} $$

where γs is the popularity of video vs.
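A small sketch of the popularity weighting, assuming an illustrative Zipf exponent (the paper does not fix one here):

```python
def zipf_popularity(S: int, alpha: float = 0.8) -> list[float]:
    """Normalised Zipf popularity of S videos ranked by popularity."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, S + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def qoe_cost(tau_total: float, gamma_s: float) -> float:
    """J_n = (tau_sum)^gamma_s: the popularity exponent makes the delay of
    popular videos weigh more heavily in the minimisation objective."""
    return tau_total ** gamma_s
```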


Considering the “cache before transmission” and “transcode from high bit rate to low bit rate” principles of video delivery, the limited computing and storage resources, and the uniqueness of the distribution strategy, the video caching and content distribution constraints in practice can be divided into the following five categories.

1) Execution order constraints

Any user video request must first check whether the requested file is cached on an edge server. Define the binary decision variable \( {\delta}_n^{sm}=1 \) to indicate that a copy of the m-th bit rate version of video s is cached on edge server n, and \( {\delta}_n^{sm}=0 \) otherwise. Therefore,

$$ {a}_n^{sm}(t)\le {\delta}_n^{sm} $$
$$ {c}_{n{n}^{\prime}}^{sm}(t)\le {\delta}_{n^{\prime}}^{sm} $$
$$ {e}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)\le {\delta}_{n^{\prime}}^{s{m}^{\prime }} $$
$$ {f}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)\le {\delta}_n^{s{m}^{\prime }} $$
$$ {g}_{n{n}^{\prime }{n}^{{\prime\prime}}}^{sm{m}^{\prime }}(t)\le {\delta}_{n^{\prime}}^{s{m}^{\prime }} $$
2) Transcoding order constraints

For distribution modes 2) and 4), which involve transcoding operations, transcoding is unidirectional from a high bit rate version to a low bit rate version. Hence

$$ {b}_n^{sm}(t)\le \min \left\{1,{\sum}_{m^{\prime }=1}^{m-1}{\delta}_n^{s{m}^{\prime }}\right\} $$
$$ {d}_{n{n}^{\prime}}^{sm}(t)\le \min \left\{1,{\sum}_{m^{\prime }=1}^{m-1}{\delta}_{n^{\prime}}^{s{m}^{\prime }}\right\} $$
3) Edge server storage capacity constraints

For edge server n, the size of video files it stores will not exceed its storage limit, that is

$$ {\sum}_{s\in V}{\sum}_{m\in M}{\delta}_n^{sm}{r}_{sm}{l}_s\le {C}_n $$

where Cn is the storage capacity of edge server n.

4) Edge server computing capacity constraints

Under the multi-edge server collaboration framework, each edge server not only processes the video transcoding operations of the users within its single-hop range, but also assists other edge servers with transcoding, and all transcoding operations must not exceed the maximum load it can handle. Let the number of requests for video vsm at edge server n be \( {N}_n^{sm}(t) \), let βn be the per-bit transcoding time of edge server n, and let \( {T}_n^{\mathrm{max}} \) be the maximum computing delay of edge server n. Then:

$$ {\beta}_n\left(\begin{array}{l}{\sum}_{n\in N}{\sum}_{m\in M}{N}_n^{sm}(t){b}_n^{sm}(t)+{\sum}_{n^{\prime}\ne n}{\sum}_{m^{\prime }=1}^{m-1}{e}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)\\ {}+{\sum}_{n^{\prime}\ne n}{\sum}_{n\in N}{\sum}_{m\in M}{N}_{n^{\prime}}^{s^{\prime }m}(t){d}_{n{n}^{\prime}}^{sm}(t)+{\sum}_{m^{\prime }=1}^{m-1}\left({f}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)+{\sum}_{n^{{\prime\prime}}\ne {n}^{\prime}\ne n}{g}_{n{n}^{\prime }{n}^{{\prime\prime}}}^{sm{m}^{\prime }}\right)\end{array}\right)\le {T}_n^{\mathrm{max}} $$
5) Uniqueness constraint of the distribution strategy

When any user’s video request arrives, the control center gives exactly one distribution strategy in response, namely

$$ {a}_n^{sm}(t)+{b}_n^{sm}(t)+\sum \limits_{n^{\prime}\ne n}\left(\begin{array}{l}{c}_{n{n}^{\prime}}^{sm}(t)+{d}_{n{n}^{\prime}}^{sm}(t)+\\ {}{\sum}_{m^{\prime }=1}^{m-1}\left({e}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)+{f}_{n{n}^{\prime}}^{sm{m}^{\prime }}(t)+{\sum}_{n^{{\prime\prime}}\ne {n}^{\prime}\ne n}{g}_{n{n}^{\prime }{n}^{{\prime\prime}}}^{sm{m}^{\prime }}(t)\right)\end{array}\right)+{h}^{sm}(t)=1 $$

Thus, the multi-edge collaborative computing video caching and content distribution strategy mentioned in this paper can be modeled as the following integer optimization model with constraints:

$$ {\displaystyle \begin{array}{c}\left(\mathrm{Objective}\right):\min {J}_n\\ {}\mathrm{Constraints}:\mathrm{formula}\ (25)-(34)\end{array}} $$
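As a toy illustration of the model, the following sketch serves one request by picking, among the modes that satisfy the execution-order and transcoding-order constraints, the single mode (uniqueness constraint) with the smallest access delay; the cache states and delay values are made-up numbers, not from the paper:

```python
def best_mode(m: int, cached_local: set, cached_neighbour: set,
              delays: dict) -> str:
    """m: requested version index (a larger index is assumed to mean a higher
    bit rate).  Modes 'a'/'c' need the exact version cached locally / at the
    neighbour; 'b'/'d' need a higher version to transcode from; 'h' (remote
    fetch) is always feasible."""
    feasible = {"h"}
    if m in cached_local:
        feasible.add("a")
    if any(v > m for v in cached_local):
        feasible.add("b")
    if m in cached_neighbour:
        feasible.add("c")
    if any(v > m for v in cached_neighbour):
        feasible.add("d")
    return min(feasible, key=lambda k: delays[k])   # exactly one mode chosen

delays = {"a": 0.0, "b": 0.5, "c": 0.6, "d": 1.1, "h": 5.0}
```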

Optimization problem solving

Problem analysis

Integer optimization is NP-hard [32], and no efficient general-purpose method is known so far; heuristic search algorithms [33, 34] and branch-and-bound methods [35] are usually needed to approximate the global optimum. Moreover, the model built in Section 3.3 is more accurately called a random integer optimization problem, because the exact number of user video requests, \( {N}_n^{sm}(t) \), cannot be obtained in advance. However, on a longer time scale, the number of user requests \( {N}_n^{sm}(t) \) exhibits a self-similar characteristic. Therefore, this paper adopts the following strategy:

1) First, the historical user request counts form a set \( \left\{{N}_n^{sm}(t)\right\} \), and each element \( {N}_n^{sm}(t) \) in the set satisfies:

$$ \min \_{N}_n^{sm}(t)\le {N}_n^{sm}(t)\le \max \_{N}_n^{sm}(t) $$

Small changes in the number of user video requests will not have a significant impact on the QoE of all users. Thus, without loss of generality, the number of running scenarios of an edge server can be set as:

$$ Nu{m}_n=\left\lceil \frac{\max \_{N}_n^{sm}(t)-\min \_{N}_n^{sm}(t)}{\lambda_n}\right\rceil $$

where ⌈·⌉ denotes rounding up and λn is the division interval. The running scenarios can therefore be numbered 1, 2, …, Numn in ascending order of the number of user video requests \( {N}_n^{sm}(t) \). Hence, for N edge servers, the number of possible running scenarios is \( \prod \limits_{n\in N} Num{}_n \). It should be pointed out that this interval division effectively reduces the number of running scenarios, thereby reducing the difficulty of solving the optimization model.

2) For each running scenario, \( {N}_n^{sm}(t) \) is taken as the upper bound of the current interval. The random integer optimization model in Section 3.3 thus degenerates into a classic integer programming model. However, considering that this problem involves many decision variables with competitive relationships (such as local and collaborative transcoding competing for computing resources), solving it with classic solvers such as CPLEX may take too long. Therefore, this paper further adopts a pyramid-structured intelligent evolutionary algorithm based on optimal cooperation to solve the problem.

According to the above two steps, the best video caching and content distribution strategy for each scenario can be computed offline and stored in the control center. In actual application, the best caching and distribution strategy is selected by table lookup using only the current number of user video requests. The advantage of this scheme is that it avoids the extra computation time that solving for the optimal strategy online would introduce. The table lookup has complexity O(n), so it remains computationally efficient even when the number of running scenarios is large.
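The offline table-lookup scheme can be illustrated as follows. The table contents and strategy labels are hypothetical placeholders; a plain list scan would match the O(n) complexity quoted above, while the dictionary used here makes each lookup O(1) on average:

```python
# Hypothetical offline strategy table: key = tuple of per-server scenario
# numbers, value = the precomputed caching/distribution decision.
strategy_table = {
    (1, 2): "cache-local",
    (2, 2): "fetch-neighbor",
}

def lookup_strategy(scenario_tuple, table, default="fetch-cloud"):
    """Select the precomputed strategy for the current running scenario."""
    return table.get(scenario_tuple, default)

assert lookup_strategy((2, 2), strategy_table) == "fetch-neighbor"
assert lookup_strategy((3, 1), strategy_table) == "fetch-cloud"
```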

Pyramid group intelligent evolutionary solution algorithm based on optimal cooperation

The block diagram of the group intelligent evolutionary solution algorithm based on the pyramid structure is shown in Fig. 5. The search for the global optimal solution is governed by the following formulas:

$$ \left[{F}^{\prime },I\right]= sort(F) \tag{26} $$
$$ X=\left[{X}_4,{X}_3,{X}_2,{X}_1\right] \tag{27} $$
$$ \sum_{p=1}^4 length\left({X}_p\right)= length(X) \tag{28} $$
$$ length\left({X}_p\right)< length\left({X}_{p+1}\right),\ p=1,2,3 \tag{29} $$
$$ {x}_p^{q+1}={x}_p^q+{R}_p^q\times \sigma \tag{30} $$
$$ {R}_p^q\le {R}_{p+1}^q,\ p=1,2,3 \tag{31} $$
$$ {R}_p^{q+1}={R}_p^0\times {\mu}^q \tag{32} $$
$$ x={x}_p^{q+1}+\theta \left({x}_p^{q+1}-{x}_p^q\right) \tag{33} $$
Fig. 5 Pyramid structure framework

Equation (26) sorts the population X in ascending order of fitness value F to obtain the sorted fitness values F′ and index I. The sorted population X is then divided into four parts according to Eqs. (27) and (28), and the number of individuals in each part satisfies Eq. (29): the group X1 of the best individuals is the smallest and the group X4 of the worst individuals is the largest. X1 is called the mining layer, X2 and X3 the transfer layers, and X4 the exploration layer. Each individual \( {x}_p^q \) updates according to Eq. (30) within its layer-specific search neighborhood \( {R}_p^q \). The neighborhood sizes satisfy Eq. (31): the mining layer generates a search step from a random number σ in [−1, 1] within a small neighborhood in order to refine the best solutions, while the exploration layer searches a large neighborhood for potentially outstanding individuals. As the iteration count q grows, each generation gradually approaches the global optimum, so the search radius \( {R}_p^q \) of each layer is adaptively shrunk with a contraction factor 0 < μ < 1 according to Eq. (32), improving optimization efficiency. After generating new individuals in each layer, the algorithm collaborates between layers: the excellent individuals of the exploration layer and of the transfer layers are transferred to the mining layer and the exploration layer respectively. The transferred individuals are cultivated in the receiving layer and accelerated, with acceleration step θ, along the direction of new-individual generation according to Eq. (33), yielding the accelerated individual x.
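The sort-and-partition step described above can be sketched as follows; the layer sizes are illustrative, since the paper does not specify them:

```python
import numpy as np

def partition_layers(population, fitness, sizes=(2, 3, 5, 10)):
    """Sort by fitness (ascending: here a lower QoE cost is better) and split
    the population into mining (X1), transfer (X2, X3) and exploration (X4)
    layers, with layer sizes growing from X1 to X4."""
    order = np.argsort(fitness)          # the [F', I] = sort(F) step
    sorted_pop = population[order]
    layers, start = [], 0
    for s in sizes:
        layers.append(sorted_pop[start:start + s])
        start += s
    return layers  # [X1, X2, X3, X4]

rng = np.random.default_rng(0)
pop = rng.random((20, 4))
fit = rng.random(20)
X1, X2, X3, X4 = partition_layers(pop, fit)
assert len(X1) < len(X2) < len(X3) < len(X4)   # sizes grow toward X4
assert sum(map(len, (X1, X2, X3, X4))) == len(pop)
```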

The standard pyramid evolution strategy (PES) algorithm includes two kinds of collaboration: layer-to-layer group collaboration, which strengthens communication between sub-populations, and collaboration between individuals within a layer and their parent individuals. Although parent individuals provide some guidance for generating offspring, they contribute little toward producing excellent individuals, which slows the convergence of the algorithm and hurts its operating efficiency. The particle swarm algorithm updates its population using the individual extremum and the global extremum, which gives it a fast convergence speed [36]; this update rule compensates exactly for the lack of cooperation among individuals in the pyramid-structure intelligent evolution algorithm. This paper integrates that idea into the standard algorithm: each individual in each layer is updated through cooperation with the optimal individual of its layer in the current generation and with the optimal individual of the entire population. The resulting pyramid-structure intelligent evolution algorithm based on the optimal cooperation strategy updates the individuals of each layer as follows:

$$ {x}_p^{q+1}={x}_p^q+\mathit{\operatorname{rand}}\times {R}_p^q\times \left[w\left( pBes{t}_p^q-{x}_p^q\right)+\left(1-w\right)\left( gBes{t}^q-{x}_p^q\right)\right] \tag{34} $$

where \( {x}_p^{q+1} \) is the new individual produced in layer p at generation q + 1, and \( {x}_p^q \) is the parent individual in layer p. rand is a random number in [0, 1], and \( {R}_p^q \) is the search radius of the current generation. \( pBes{t}_p^q \) is the optimal individual of layer p at generation q, and gBestq is the optimal individual of the entire population at generation q. The weight 0 < w < 1 biases the new individual toward the \( pBes{t}_p^q \) direction, while 1 − w biases it toward the gBestq direction.
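A minimal sketch of this update rule, with illustrative values for w and the search radius:

```python
import numpy as np

def update_individual(x, p_best, g_best, radius, w=0.6, rng=None):
    """Optimal-cooperation update: move x along a weighted combination of
    the directions toward the layer best (p_best) and the global best
    (g_best). w and radius here are assumed values for illustration."""
    rng = rng if rng is not None else np.random.default_rng()
    r = rng.random()  # rand in [0, 1]
    step = w * (p_best - x) + (1.0 - w) * (g_best - x)
    return x + r * radius * step

x = np.array([1.0, 1.0])
p_best = np.array([2.0, 0.0])
g_best = np.array([0.0, 2.0])
x_new = update_individual(x, p_best, g_best, radius=0.5,
                          rng=np.random.default_rng(1))
assert x_new.shape == x.shape
```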

Under the optimal cooperation strategy, each parent individual searches along the direction of the joint force generated by the individual extremum and the global extremum. This not only strengthens cooperation among individuals within a layer, but also connects the individuals of every layer to the globally optimal individual. With layer-to-layer collaboration and optimal cooperation among individuals, the convergence of the pyramid-structure intelligent evolution algorithm is accelerated and its optimization efficiency improved.

Therefore, in this paper, the steps of applying pyramid structure intelligent evolution algorithm to solve the optimal strategy for video caching and content distribution are as follows.

  • Step 1: Initialize parameters: set the maximum number of iterations Imax, the population size G, and the initial search radius \( {R}_p^0 \) of each layer;

  • Step 2: Randomly generate an initial population {x0} of size G and set the iteration counter q = 1;

  • Step 3: Calculate the fitness value of the population {xq} (i.e., the current QoE function). According to the fitness values J(xq), divide the population into four sub-populations Xp (p = 1, 2, 3, 4), and record the current layer extrema \( pBes{t}_p^q \) and the global extremum gBestq;

  • Step 4: According to Eq. (34), generate a new individual \( {x}_p^{q+1} \) for each individual \( {x}_p^q \) of layer p (p = 1, 2, 3, 4). Select the corresponding number of individuals from each layer to pass to the upper layer, and cultivate them according to Eq. (34). Merge the newly generated individuals, the transferred individuals and the parent individuals, and select the required number of individuals as the updated group \( \left\{{x}_p^{q+1}\right\} \) of this layer.

  • Step 5: Merge the updated groups \( \left\{{x}_p^{q+1}\right\} \) of all layers into the new population \( \left\{{x}^{q+1}\right\} \) of generation q + 1.

  • Step 6: Check the termination condition. If the number of iterations reaches the maximum, q = Imax, output gBestq and stop. Otherwise, set q = q + 1, update the search radius \( {R}_p^q \) of each layer, and return to Step 3.
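Steps 1-6 can be condensed into the following simplified loop. It omits the inter-layer transfer and acceleration of Step 4 and uses illustrative layer sizes and parameters, so it is a sketch of the scheme under stated assumptions rather than the authors' implementation (which was in MATLAB):

```python
import numpy as np

def pyramid_evolve(fitness_fn, dim, pop_size=20, max_iter=100,
                   sizes=(2, 3, 5, 10), r0=1.0, mu=0.95, w=0.6, seed=0):
    """Minimize fitness_fn (matching a QoE cost) with a pyramid-layered
    population and the optimal-cooperation update. sizes must sum to
    pop_size; sizes, r0, mu and w are illustrative choices."""
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, dim))                  # Step 2
    g_best, g_val = None, np.inf
    radius = r0
    for _ in range(max_iter):                          # Step 6 loop
        fit = np.array([fitness_fn(x) for x in pop])   # Step 3
        order = np.argsort(fit)
        pop, fit = pop[order], fit[order]
        if fit[0] < g_val:                             # track global best
            g_best, g_val = pop[0].copy(), fit[0]
        new_pop, start = [], 0
        for s in sizes:                                # Step 4 (per layer)
            layer = pop[start:start + s]
            p_best = layer[0]                          # layer best
            r = rng.random((s, 1))
            step = w * (p_best - layer) + (1 - w) * (g_best - layer)
            new_pop.append(layer + r * radius * step)
            start += s
        pop = np.vstack(new_pop)                       # Step 5
        radius *= mu                                   # shrink search radius
    return g_best, g_val

best, val = pyramid_evolve(lambda x: float(np.sum(x ** 2)), dim=3)
assert best.shape == (3,)
assert val <= 3.0  # no worse than any random start in [0, 1]^3
```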

Experimental results and discussion

Experimental setup

On hardware with an i5-3230M CPU, 10 GB RAM, and a 1 TB HDD + 512 GB SSD, we use MATLAB R2016a and the Opnet network simulator to verify the performance of the proposed algorithm. The parameters of the Logistic chaotic-QCA encryption algorithm are ω01 = ω03 = 0.28, ω02 = 0.7, ω04 = 0.5, and Ψ = 3.9. There are 20 video file types, with durations uniformly distributed between 5 min and 45 min. Each video has 4 versions with bit rates of 1.25, 1, 0.75 and 0.5 times the original rate (2 Mbps). Without loss of generality, the user arrival rate under each edge server follows a Poisson distribution with rate 50/min, and the video popularity follows a Zipf distribution with parameter in [0.3, 0.9]; each version of the same video has the same request probability. We also set τn0 = 100 ms, \( {\tau}_{n{n}^{\prime }} \) uniformly distributed in the [5, 50] ms interval, and the number of edge servers to 10. The reference value of the unit bit transcoding time βn is 2 μs, the reference value of the edge server cache capacity α is 50, and the maximum computation delay \( {T}_n^{\mathrm{max}} \) allowed by servers is 150 ms.
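The request model of this setup can be reproduced as a short sketch; parameter values are as stated above, and the helper name is ours:

```python
import numpy as np

def zipf_popularity(n_videos=20, gamma=0.75):
    """Zipf popularity over video ranks: p_k proportional to k^(-gamma)."""
    ranks = np.arange(1, n_videos + 1)
    weights = ranks ** (-float(gamma))
    return weights / weights.sum()

rng = np.random.default_rng(0)
pop = zipf_popularity()
# Poisson arrivals at 50 requests/min, as in the setup: one hour of
# per-minute request counts under a single edge server.
arrivals = rng.poisson(lam=50, size=60)
assert abs(pop.sum() - 1.0) < 1e-9
assert pop[0] > pop[-1]  # more popular videos are requested more often
```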

The comparison algorithms are the JCPNonCo scheme of reference [18], the CCNonP scheme of reference [27], and the APCP-OptRs scheme of reference [28].

Parameter performance analysis

Transcoding capability of edge servers

The transcoding capability of an edge server is closely related to the performance of its CPU; in other words, βn largely reflects server performance: the smaller βn is, the higher the CPU performance and the stronger the video transcoding capability. Figure 6 shows how the QoE function changes as the unit transcoding time βn varies from 0.5 μs to 5 μs in steps of 0.5 μs. The overall trend is that the QoE function value gradually increases with βn. When βn changes from 0.5 μs to 3 μs, the QoE changes relatively slowly; after βn exceeds 3 μs, the QoE function value rises sharply. The reason is that when βn is small, each server's transcoding capability is strong, so there are more options for video caching and content distribution strategies. When βn is large, the long processing delay of a single server makes providing coordinated transcoding to other edge servers no longer optimal, and edge servers tend to cache videos at all bit rates to satisfy users' video experience. Note, however, that a lower transcoding time implies higher CPU performance requirements and therefore rising economic cost.

Fig. 6 QoE function changes with the unit bit transcoding time βn

Cache capacity of edge server

To analyze the impact of the edge server cache capacity (α in Eq. (2)) on QoE performance, Fig. 7 shows the value of the QoE function as α varies from 10 to 50 in steps of 5. As α increases from 10 to 35, QoE gradually decreases; once α exceeds 35, QoE performance no longer changes significantly. We further define the storage hit ratio as the proportion of user-requested video types that are cached locally by edge servers. Figure 8 shows the relationship between the storage hit ratio and the cache capacity α. As the cache capacity of edge servers grows, the storage hit ratio rises; when α exceeds 25, the hit ratio reaches 100% and no longer changes with increasing cache capacity. This is because when the cache capacity is large, more resource copies can be stored on the local edge server, so there is no need to request resources from the cloud or adjacent edge servers, nor to rely on other edge server nodes for assisted transcoding. Thus, the numerical results show that increasing the cache capacity of edge servers improves user QoE only to a limited extent while raising economic cost.
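Under our reading of the hit-ratio definition (the fraction of requested video types found in the local cache), it can be computed as:

```python
def storage_hit_ratio(requested_types, cached_types):
    """Fraction of distinct requested video types served from the local
    cache (our interpretation of the hit-ratio definition in the text)."""
    requested = set(requested_types)
    if not requested:
        return 1.0
    return len(requested & set(cached_types)) / len(requested)

assert storage_hit_ratio([1, 2, 3, 4], [1, 2]) == 0.5
assert storage_hit_ratio([1, 2], [1, 2, 3]) == 1.0  # cache covers requests
```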

Fig. 7 Relationship between QoE function and edge server cache capacity α

Fig. 8 Relationship between storage hit rate and cache capacity α

QoE comparison of different strategies

Set the Zipf distribution parameter γs = 0.75. Figure 9 shows the QoE values of the different algorithms as the number of edge servers varies from 1 to 15.

Fig. 9 QoE function changes with the number of edge servers

As Fig. 9 shows, when there is only one edge server, there is almost no difference between the QoE values of the algorithms, because no edge server collaboration mechanism exists and all video caching and transcoding operations are performed on the same server. As the number of edge servers increases, reference [18], which considers no collaboration mechanism among edge servers (each edge server exchanges data only with the remote server and does not cooperate with other edge servers), maintains a QoE value of about 260. References [27, 28] and the proposed algorithm all exploit cooperative relationships, so their QoE values gradually decrease as the number of edge servers increases. However, compared with the two-hop mode among three edge servers in this paper, reference [27] provides no collaborative transcoding service between edge servers, and reference [28] considers edge servers only for video caching, in a single-hop mode with transcoding between at most two edge servers. The proposed algorithm thus admits more feasible caching and content distribution solutions, and its QoE reduction improves on the algorithms of references [27] and [28] by 22.84% and 14.40% respectively.

Suppose the number of edge servers is 10; Fig. 10 shows the QoE function value as the Zipf distribution parameter γs changes. Without loss of generality, γs varies from 0.3 to 0.9 in steps of 0.1. All methods show a similar trend: the QoE function decreases as the distribution parameter grows, which shows that caching popular videos on edge servers can significantly improve users' video request satisfaction. As γs changes from 0.3 to 0.9, the algorithm proposed in this paper achieves QoE reductions of 35.22%, 16.40% and 2.16% relative to references [18, 27, 28] respectively. This is because this paper additionally considers a two-hop collaboration scenario among three edge servers when designing the video content distribution strategy, thereby providing additional strategy options.

Fig. 10 QoE function changes with Zipf parameter

Comparison of average access delay of different strategies

Further, algorithm performance is tested in the Opnet environment. Figure 11 shows the average access delay of users under each edge server for the different video caching and content distribution strategies when the number of edge servers is 10.

Fig. 11 Average access delay of users

As Fig. 11 shows, the algorithm of reference [18] does not consider collaboration between edge servers, so its average user access delay remains unchanged and is the largest of the four methods. References [27, 28] and the proposed algorithm all exploit cooperative relationships, so their average user access delays are shorter than that of reference [18]. The proposed algorithm further considers video caching and transcoding in a two-hop cooperative manner among three edge servers, which is essentially an expansion and derivation of references [27, 28]. This gives the control center additional decision options and lets the multi-edge-server system operate more efficiently. The access delay of users is therefore shortened by 45.21%, 24.66% and 14.06% compared with references [18, 27, 28] respectively, for an average reduction of 27.98%.


This paper proposes a video caching and content distribution mechanism for a multi-edge collaborative computing environment. Based on a definition of user QoE, the mechanism is modeled as a stochastic integer programming problem and solved using a pyramid-structure intelligent evolution algorithm based on optimal cooperation. Experimental examples show that the proposed algorithm enriches the decision options for video caching and content distribution strategies by considering two-hop collaboration scenarios among three edge servers. It also keeps the computational load and storage demands of edge servers reasonable, making it well suited to practical engineering application.

The established model works in an offline mode; in future work we will study real-time online multi-edge server collaborative video caching and content distribution strategies.

Availability of data and materials

All the data and materials in this article are available.


  1. Bairagi AK, Abedin SF, Tran NH et al (2018) QoE-enabled unlicensed spectrum sharing in 5G: a game-theoretic approach. IEEE Access 6:50538–50554

  2. Schwarzmann S, Marquezan CC, Bosk M et al (2019) Estimating video streaming QoE in the 5G architecture using machine learning. In: Proceedings of the 4th Internet-QoE Workshop

  3. Segura-Garcia J, Felici-Castell S, Garcia-Pineda M (2018) Performance evaluation of different techniques to estimate subjective quality in live video streaming applications over LTE-advance mobile networks. J Netw Comput Appl 107(1):22–37

  4. Qi L, Zhang X, Dou W, Ni Q (2017) A distributed locality-sensitive hashing based approach for cloud service recommendation from multi-source data. IEEE J Sel Areas Commun 35(11):2616–2624

  5. Gao G, Zhang H, Hu H et al (2018) Optimizing quality of experience for adaptive bitrate streaming via viewer interest inference. IEEE Trans Multimed 20(12):3399–3413

  6. Deng KY, Yuan L, Wan Y et al (2018) Optimized cross-layer transmission for scalable video over DVB-H networks. Signal Process Image Commun 63(9):81–91

  7. Qi L, Dou W, Wang W, Li G, Yu H, Wan S (2018) Dynamic mobile crowdsourcing selection for electricity load forecasting. IEEE Access 6:46926–46937

  8. Qi L, Dou W, Hu C, Zhou Y, Yu J (2015) A context-aware service evaluation approach over big data for cloud applications. IEEE Trans Cloud Comput

  9. Zhu H (2019) A simplified deniable authentication scheme in cloud-based pay-TV system with privacy protection. Int J Commun Syst 32(11):3967–3979

  10. Yan H, Li X, Wang Y et al (2018) Centralized duplicate removal video storage system with privacy preservation in IoT. Sensors 18(6):1814–1826

  11. Sugathapala I, Hanif MF, Lorenzo B et al (2018) Topology adaptive sum rate maximization in the downlink of dynamic wireless networks. IEEE Trans Commun 66(8):3501–3516

  12. Huang D, Wu H (2018) Edge clouds: pushing the boundary of mobile clouds. In: Mobile Cloud Computing. Elsevier, pp 153–176

  13. Aral A, Ovatman T (2018) A decentralized replica placement algorithm for edge computing. IEEE Trans Netw Serv Manag 17(2):516–529

  14. Ma X, Lin C, Zhang H et al (2018) Energy-aware computation offloading of IoT sensors in cloudlet-based mobile edge computing. Sensors 18(6):1945–1953

  15. Psaras I, Saino L, Pavlou G (2014) Revisiting resource pooling: the case for in-network resource sharing. In: HotNets-XIII: 13th ACM Workshop on Hot Topics in Networks, 27–28 October 2014. ACM, Los Angeles

  16. Chang S-Y, Lai C-F, Huang Y-M (2012) Dynamic adjustable multimedia streaming service architecture over cloud computing. Comput Commun 35(15):1798–1808

  17. Gao G, Wen Y, Cai J (2017) vCache: supporting cost-efficient adaptive bitrate streaming. IEEE Multimed 24(3):19–27

  18. Pedersen H, Dey S (2016) Enhancing mobile video capacity and quality using rate adaptation, RAN caching and processing. IEEE/ACM Trans Networking 24(2):996–1010

  19. Zhang W, Wen Y, Chen Z et al (2013) QoE-driven cache management for HTTP adaptive bit rate (ABR) streaming over wireless networks. In: IEEE Global Communications Conference

  20. Pääkkönen P, Heikkinen A, Aihkisalo T (2019) Online architecture for predicting live video transcoding resources. J Cloud Comput 8(9):1–24

  21. Ivan S, Mirko S, Lea SK (2018) Game categorization for deriving QoE-driven video encoding configuration strategies for cloud gaming. ACM Trans Multimed Comput Commun Appl 14(3):1–24

  22. Ananthanarayanan G, Bahl P, Bodík P et al (2017) Real-time video analytics: the killer app for edge computing. Computer 50(10):58–67

  23. Long C, Cao Y, Jiang T et al (2017) Edge computing framework for cooperative video processing in multimedia IoT systems. IEEE Trans Multimed 20(5):1126–1139

  24. Shuping P, Oscar FJ, Khodashenas PS et al (2017) QoE-oriented mobile edge service management leveraging SDN and NFV. Mob Inf Syst 2017:1–14

  25. Shun-Ren Y, Yu-Ju T, Chen-Chia H et al (2019) Multi-access edge computing enhanced video streaming: proof-of-concept implementation and prediction/QoE models. IEEE Trans Veh Technol 68(2):1888–1902

  26. Desheng W, Yanrong P, Xiaoqiang M et al (2018) Adaptive wireless video streaming based on edge computing: opportunities and approaches. IEEE Trans Serv Comput 12(5):685–697

  27. Li C, Toni L, Zou J et al (2018) QoE-driven mobile edge caching placement for adaptive video streaming. IEEE Trans Multimed 20(4):965–984

  28. Tran TX, Pompili D (2019) Adaptive bitrate video caching and processing in mobile-edge computing networks. IEEE Trans Mob Comput 18(9):1965–1978

  29. Das B, Paul AK, De D (2019) An unconventional arithmetic logic unit design and computing in actin quantum cellular automata. Microsyst Technol

  30. Pavlic J, Burkeljca J (2019) FFmpeg based coding efficiency comparison of H.264/AVC, H.265/HEVC and VP9 video coding standards for video hosting websites. Int J Comput Appl 182(37):1–8

  31. Osman AM, Osman NI (2018) A comparison of cache replacement algorithms for video services. Int J Comput Sci Inf Technol 10(2):95–111

  32. Zhou YF, Yu HX, Li Z, Su JF, Liu CS (2020) Robust optimization of a distribution network location-routing problem under carbon trading policies. IEEE Access 8(1):46288–46306

  33. Yufeng Z, Na C (2019) The LAP under facility disruptions during early post-earthquake rescue using PSO-GA hybrid algorithm. Fresenius Environ Bull 28(12):9906–9914

  34. Jiafu S, Yu Y, Tao Y (2018) Measuring knowledge diffusion efficiency in R&D networks. Knowl Manag Res Pract 16(2):1–12

  35. Jian J, Guo Y, Jiang L et al (2019) A multi-objective optimization model for green supply chain considering environmental benefits. Sustainability 11(21):5911–5931

  36. Sierra MR, Coello Coello AC (2005) Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. Lect Notes Comput Sci 3410:505–519



The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions. We would also like to acknowledge all our team members.

About the authors

Zhi Liu, Post-Graduate, Network Engineer. He is currently responsible for the information construction of continuing education at Hunan Agricultural University. His research interests include complex networks and intelligent information processing.

Bo Qiao, Ph.D. of Agriculture Science, Lecturer. Graduated from Hunan Agricultural University in 2019. Worked in Hunan Agricultural University. His research interests include agricultural knowledge graph and natural language processing.

Fang Kui, Ph.D. of Computer Science, Professor. Graduated from National University of Defense Technology in 2000. Worked in Hunan Agricultural University. His research interests include Graphic image processing and agricultural information engineering.


This work was supported by the Natural Science Foundation of Hunan Province, China (No.2019JJ40133).

Author information

Authors and Affiliations



The main idea and experimental guidance of this article are due to Kui Fang. The program design, coding and article writing were mainly completed by Zhi Liu and Bo Qiao. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Kui Fang.

Ethics declarations

Competing interests

The authors of this article declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


Cite this article

Liu, Z., Qiao, B. & Fang, K. Joint optimization strategy for QoE-aware encrypted video caching and content distributing in multi-edge collaborative computing environment. J Cloud Comp 9, 56 (2020).
