Joint optimization strategy for QoE-aware encrypted video caching and content distribution in a multi-edge collaborative computing environment

Video request traffic in 5G networks will grow explosively, and adaptive bit rate technology can provide users with reliable video responses. Placing video resources on edge servers close to users avoids the excessive network load suffered by traditional centralized cloud platform solutions. Moreover, multiple edge servers can provide caching and transcoding support through collaboration mechanisms, which further improves users' Quality of Experience (QoE). However, the diversity of collaboration mechanisms and the competition between the local and collaborative services of edge servers for computing and storage resources make video caching and content distribution strategies difficult to design. To solve this problem, the video caching and content distribution problem is modeled as a stochastic integer programming problem in a multi-edge-server scenario with at most two-hop collaboration. To improve the security of video data transmission, the video stream is encrypted using an algorithm based on the Logistic chaotic map and Quantum-dot Cellular Automata (QCA). To solve the resulting integer program efficiently, this paper uses a pyramid-structured intelligent evolution algorithm based on an optimal cooperation strategy. Simulation experiments show that the proposed method obtains higher QoE values than several recent methods. In addition, its average access delay is shortened by more than 27.98%, which verifies its reliability.


Introduction
In 5G networks, video services will become mainstream, and the contradiction between the explosive growth of data volume and QoE is becoming increasingly prominent [1]. In other words, when a user in a 5G network requests a video resource at a certain bit rate, remote storage devices must respond with the necessary codec operations within the shortest possible time [2]. Therefore, research on QoE-aware video caching and content distribution technology has attracted extensive attention from academia and industry.
Due to differences in users' hardware processing capabilities, network channel conditions, and so on, different users usually request video files of different quality from remote video storage devices [3]. Based on users' behavioral characteristics, Adaptive Bit Rate (ABR) technology is widely used in video services to improve users' QoE [4,5].
Because there are obvious differences in users' hardware processing capabilities and network channel conditions, and the energy carried by user terminals such as mobile phones and tablet computers is often limited, superimposing and decoding multiple coding layers on the user side to obtain the required bit rate, as in Scalable Video Coding (SVC) [6], is generally avoided. Instead, the more common ABR implementation first encodes the same video file at the remote end into versions with different formats and resolutions; then, according to the user's request and network conditions, a variant file is selectively sent to the user [7,8]. This method uses a "storage/compute-transmit" mode, and its advantage is that the decoding overhead of mobile terminals is avoided, thereby saving the corresponding energy consumption. The corresponding disadvantage is that when the number of users is very large and video requests are frequent, video traffic in the network grows explosively, which lengthens video file transformation time and lowers QoE.
Further, the available technical routes for ABR under the "storage/computing-transmission" mode can be divided into centralized cloud computing solutions and decentralized edge collaborative computing solutions. In centralized cloud computing solutions, video storage and codec operations are all implemented on the remote cloud platform, and users must obtain the corresponding video resources from the far end [9,10]. In decentralized edge collaborative computing solutions, video files with different bit rates and formats are stored on edge computing devices closer to users, and different edge computing devices implement ABR functions through cooperative codec operations and resource interaction [11,12]. Compared with centralized cloud computing solutions, decentralized edge computing solutions can effectively reduce video traffic overhead in the network, and because edge computing devices are closer to the user side, users' QoE is correspondingly higher [13]. It should be pointed out, however, that the computing and storage resources of edge computing devices are often limited: a single edge computing device cannot satisfy the explosive growth of video requests in 5G networks [14]. Moreover, the possible ways for edge computing devices to respond to the same video request are very diverse. For example, a local node may cache and transmit directly, a neighbor node may cache and transmit, a neighbor node may transcode and then cache and transmit, or a neighbor node may cache and then transcode and transmit, which undoubtedly increases the difficulty of designing video caching and content distribution mechanisms. This paper mainly designs a QoE-aware video caching and content distribution optimization strategy for a multi-edge collaborative computing environment.

Related works
Considering the hardware resources and energy consumption requirements of user terminals, passive ABR technology that requires users to perform decoding operations, of which traditional SVC coding [6] is representative, is no longer suitable for video services in 5G networks. On the contrary, approaches in which remote resource storage devices perform the caching and encoding/decoding operations have been welcomed by academia and industry. The basic principle is that, using the "storage/computing-transmission" mode, remote resource storage devices adaptively provide video streams at the corresponding bit rate according to user requests and network conditions [15,16]. Two main technical solutions are available in this mode: centralized cloud computing solutions and decentralized edge collaborative computing solutions.
Centralized cloud computing solution: All video files are stored on a remote cloud platform, and users obtain video resources with a specific bit rate over wide area networks. Many references discuss the design of video caching and content distribution mechanisms under the cloud computing framework. For example, reference [17] designed physical and virtual caches to handle video file storage and online transcoding, respectively, under a centralized cloud computing framework. Reference [18] proposed a rate adaptation algorithm that uses video characteristics to simultaneously change the video encoding and transmission rate, which increases the amount of video traffic the network can tolerate. Reference [19] modeled the cache management of video stream files as a constrained optimization problem under server storage resource constraints. Reference [20] verified an online architecture that uses Docker for real-time video transcoding in a Kubernetes-based cloud environment; the random forest regressor used in this framework provided the best overall performance in terms of transcoding speed, CPU consumption, and accuracy in predicting the number of transcoding tasks, whereas the reinforcement learning alternative was inefficient. However, in the centralized cloud computing mode, responses to user video requests are sent only after caching and encoding/decoding operations on the cloud platform. Thus, to ensure users' QoE, the cloud computing model places high demands on network bandwidth and on the hardware of storage and computing equipment, and suffers from high construction and maintenance costs [21].
Decentralized edge computing solution: In this solution, video files with different bit rates and formats are first stored on multiple edge computing devices close to the user side. When a user requests a video service with a certain bit rate and format, edge computing devices implement video caching, codec, and transmission operations cooperatively [22,23]. Compared with the centralized cloud computing model, the network backhaul time between edge computing devices and users is shorter, and the hardware performance requirements are also lower. It should be pointed out, however, that the implementation of ABR technology under edge computing schemes is diverse. For example, for the same video file request, an edge computing device can transmit directly from its local cache, fetch from a neighbor node and then transmit, have a neighbor node transcode and then cache and transmit, or fetch from a neighbor node and then transcode and transmit. Similar work is shown in [24]. In summary, a well-performing video caching and content distribution mechanism must be flexibly adjusted according to network conditions, network topology, edge device working status, and so on to obtain satisfactory QoE in a multi-edge collaborative computing environment [25].
Existing work has carried out preliminary research on video caching and content distribution mechanisms in multi-edge collaborative computing environments and has achieved some beneficial results. For example, reference [26] proposed an adaptive wireless video transcoding framework in the emerging edge computing mode to achieve finer-grained video transcoding; however, this solution inevitably consumed additional computing resources while tracking traffic changes. Reference [27] considered collaboration between multiple edge servers but provided no cooperative transcoding service between them: all video transcoding operations were performed on local servers, which demands high server computing performance and storage capacity. Reference [28] considered collaborative caching and transcoding between edge servers, but only in a single-hop mode in which caching and transcoding involve just two edge servers.
Based on the above analysis, this paper proposes a multi-edge collaborative video caching and content distribution mechanism based on stochastic integer programming. The main innovations are as follows: 1) Considering that video caching and content distribution can complete caching and transcoding operations on different edge servers, a video caching and content distribution mechanism including two-hop cooperation of edge servers is proposed. Compared with the traditional single-hop cooperation mode, which considers only two edge servers, the proposed caching and distribution mechanism is more general; 2) Based on a stochastic optimization method, with user QoE as the optimization goal, the video caching and content distribution problem is modeled as a stochastic integer linear program. The video cache on edge devices fully accounts for video popularity, which further improves the cache hit rate and the corresponding QoE; 3) To improve the security of video data transmission, the video stream is encrypted using an algorithm based on the Logistic chaotic map and Quantum-dot Cellular Automata (QCA); 4) To solve the resulting integer program efficiently, a pyramid-structured intelligent evolution algorithm based on an optimal cooperation strategy is proposed.

System model
As shown in Fig. 1, let the number of edge servers be N. Each edge server can cache content from the video resource library on remote servers and can also perform codec operations. Edge servers exchange data bidirectionally over high-speed links, and each can also connect directly to remote servers via the backhaul link. The caching and codec operations of edge servers are scheduled and controlled by the control center.

Video coding encryption technology
The rapid development of communication technology provides users with diversified and differentiated video services. At the same time, the importance of video transmission security to both video providers and users cannot be ignored. At present, the industry generally uses the H.264/AVC encoding standard to compress and transmit videos with low distortion. This paper uses an encryption algorithm based on the Logistic chaotic map and Quantum-dot Cellular Automata (QCA) [29] to encrypt the video coding. The flow of the Logistic chaotic-QCA key generation algorithm is shown in Fig. 2, in which the Quantum Cellular Neural Network (QCNN) matrix A is obtained by iterating the QCA with the Logistic chaotic system for h consecutive iterations, where h is a multiple of 512. The Logistic chaotic system is the standard Logistic map shown in formula (1): x_{k+1} = μ x_k (1 − x_k), 0 < x_k < 1.
Matrix A is split into a matrix B composed of its first three rows and a matrix C composed of its last row. Matrix B is further processed and converted into a row vector to form a key sequence, denoted S. The elements of S, taken in order, are grouped 512 at a time to form a chaotic sequence pool H, as shown in Eq. (7), where L is an integer in (0, 3n/512].
For the elements of sequence C, the index sequence Index is generated according to formula (9), where map_min_max(C, 0, 1) maps the values in C to the interval [0, 1]. Index(i) and Index(j) are selected from Index as initial values, and Index_Log1 and Index_Log2 are generated by applying the Logistic transformation; these are then rounded up according to formula (10) to obtain integers IndexC_Log1 and IndexC_Log2 in the interval [1, L].
Finally, key material is selected from the key sequence S according to IndexC_Log1 and IndexC_Log2, and the key is obtained by bit-by-bit comparison according to formula (11) until a complete 512-bit key is obtained.
The original video is encoded with H.264/AVC and contains two types of data: compressed video data and residual data [30]. To improve the reliability of video transmission, this paper uses different keys to encrypt these two types of data: the first 256 bits of the key encrypt the compressed video data, and the last 256 bits encrypt the residual data.
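To make the key-generation flow concrete, the following sketch iterates the standard Logistic map and derives a 512-bit key split into two 256-bit halves, as the text describes. The QCNN matrix construction and the index-selection steps (formulas (7)-(11)) are simplified here: quantizing each chaotic value by thresholding at 0.5 is an illustrative stand-in, and the parameters x0, mu, and h are assumed values, not from the paper.

```python
def logistic_sequence(x0, mu, n):
    """Iterate the standard Logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
    seq = []
    x = x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def make_key(x0=0.61, mu=3.99, h=1024):
    """Derive a 512-bit key from h chaotic iterations (h a multiple of 512).

    The paper derives key bits through a QCNN matrix and index sequences;
    thresholding each chaotic value at 0.5 is a simplified stand-in.
    """
    vals = logistic_sequence(x0, mu, h)
    bits = [1 if v > 0.5 else 0 for v in vals]
    key = bits[:512]
    # First 256 bits encrypt compressed video data, last 256 bits encrypt
    # residual data, matching the split described in the text.
    return key[:256], key[256:]

k_comp, k_res = make_key()
```

The two halves would then be used as separate stream-cipher keys for the compressed and residual H.264/AVC data.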

Multi-rate video cache model
Let V = {1, 2, …, s, …, S} be the video collection. Each video can be encoded into M different versions. The set v_s = {v_sm | m = 1, 2, …, M} denotes the variant set of the s-th video file. A video file is characterized by its bit rate and playback duration; note that, for the same video, all variant files have the same playback duration. Therefore, video file v_sm can be described by the binary vector v_sm = (r_sm, l_s) (12), where r_sm and l_s are the bit rate and playback duration of v_sm, respectively. Without loss of generality, let the variant files in v_s be stored in ascending order of bit rate, that is, r_s1 < r_s2 < … < r_sM. In addition, low bit rate files can be transcoded from high bit rate files. Let each edge server have a cache of size C to store video copies, where C is greater than the size of the video at the maximum bit rate, i.e.
where α is a coefficient greater than 1. Based on the above analysis, it is first assumed that the cache capacity of each edge server is limited, although video files of any bit rate can be cached to meet the requests of different users. Note that although videos generally have multiple versions with different bit rates, mainstream commercial streaming systems, considering user QoE and network conditions, usually adaptively adjust the transmitted video file to a level that matches the current network conditions. Thus, as shown in Fig. 3, the video caching strategy in this paper is: when a user k (k = 1, 2, …, K) requests video v_s, if no edge server within k's single-hop communication range caches a copy of v_s at any bit rate, then k directly fetches v_s from the remote server at the lowest bit rate the current network can afford. Otherwise, user k fetches the highest bit rate version cached by edge servers within its single-hop communication range.
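The bit rate selection rule above can be sketched as a small helper. The function name and data structures are illustrative only: `cached_versions` stands for the bit rates of video v_s cached on edge servers within the user's single-hop range, and `affordable_rates` for the bit rates the current network can sustain.

```python
def select_bitrate(cached_versions, affordable_rates):
    """Apply the paper's caching rule for one request of video v_s.

    cached_versions: bit rates of v_s cached within single-hop range.
    affordable_rates: bit rates the current network can afford.
    Returns (source, bit_rate).
    """
    if cached_versions:
        # An edge copy exists: take the highest cached bit rate version.
        return ("edge", max(cached_versions))
    # No edge copy: fetch the lowest affordable version from the remote server.
    return ("remote", min(affordable_rates))

r1 = select_bitrate([1.5, 4.0], [1.0, 2.0, 4.0])
r2 = select_bitrate([], [1.0, 2.0, 4.0])
```

For example, with versions at 1.5 and 4.0 Mbps cached nearby, the user receives the 4.0 Mbps copy from the edge; with no edge copy, the 1.0 Mbps version comes from the remote server.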

Video distribution strategy design
Under the edge server collaborative computing framework, a user's video request can be served from a remote server, from an edge server directly connected to the user by a single-hop link, or through transcoding operations on other edge servers. Figure 4 shows all possible distribution modes when a user requests the 360p version of a video file.
Combining with Fig. 4, the following eight binary variables are introduced to characterize the feasible video distribution schemes in a multi-edge collaborative computing environment:
1) a_n^{sm}(t) = 1 means that the user directly obtains video v_sm from the cache of the edge server connected to it with a single hop (denoted n, the same below); otherwise a_n^{sm}(t) = 0, as shown in Fig. 4(a);
2) b_n^{sm}(t) = 1 means that video v_sm requested by the user is obtained by edge server n (single-hop connected) transcoding a higher bit rate version; otherwise b_n^{sm}(t) = 0, as shown in Fig. 4(b);
3) c_{nn'}^{sm}(t) = 1 means that video v_sm is obtained directly from edge server n' (n ≠ n'); otherwise c_{nn'}^{sm}(t) = 0, as shown in Fig. 4(c);
4) d_{nn'}^{sm}(t) = 1 means that video v_sm is obtained by transcoding from a higher version on edge server n' (n ≠ n'); otherwise d_{nn'}^{sm}(t) = 0, as shown in Fig. 4(d);
5) e_{nn'}^{smm'}(t) = 1 means that edge server n first obtains a higher bit rate version m' from edge server n' (n ≠ n') and then serves the request after transcoding it locally; otherwise e_{nn'}^{smm'}(t) = 0, as shown in Fig. 4(e);
6) f_{nn'}^{sm}(t) = 1 means that edge server n' (n ≠ n') first obtains a high bit rate version from edge server n, then performs the transcoding operation and returns the result to edge server n; otherwise f_{nn'}^{sm}(t) = 0, as shown in Fig. 4(f). Note that this distribution mode involves two-hop cooperation of edge servers;
7) g_{nn'n''}^{smm'}(t) = 1 means that edge server n'' first obtains the video from edge server n', transcodes it, and delivers it to edge server n; otherwise g_{nn'n''}^{smm'}(t) = 0, as shown in Fig. 4(g). This distribution mode also involves two-hop cooperation of edge servers;
8) h^{sm}(t) = 1 means that video v_sm is obtained from a remote server; otherwise h^{sm}(t) = 0, as shown in Fig. 4(h).
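The eight mutually exclusive modes can be collected into a small data structure together with the uniqueness requirement (the control center answers each request with exactly one mode). The enum member names and the `one_hot` helper are illustrative, not from the paper.

```python
from enum import Enum

class DistMode(Enum):
    LOCAL_CACHE = "a"         # Fig. 4(a): direct from single-hop server n
    LOCAL_TRANSCODE = "b"     # Fig. 4(b): n transcodes a higher bit rate
    NEIGHBOR_CACHE = "c"      # Fig. 4(c): direct from neighbor n'
    NEIGHBOR_TRANSCODE = "d"  # Fig. 4(d): n' transcodes, then transfers
    FETCH_THEN_TRANSCODE = "e"  # Fig. 4(e): n fetches m' from n', transcodes locally
    TWO_HOP_RETURN = "f"      # Fig. 4(f): n' transcodes for n (two-hop)
    TWO_HOP_CHAIN = "g"       # Fig. 4(g): n'' fetches from n', transcodes, sends to n
    REMOTE = "h"              # Fig. 4(h): remote server

def one_hot(choice):
    """Uniqueness constraint: exactly one mode serves a given request."""
    return {m: int(m is choice) for m in DistMode}

x = one_hot(DistMode.NEIGHBOR_CACHE)
```

A full decision vector for one request is then such a one-hot assignment over the eight variables.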

Problem modeling
Problem objective and QoE function
The problem of video caching and content distribution is modeled as a constrained optimization problem. First, a cache strategy set is defined, indicating that the r bit rate version of video v is cached on edge server n. According to the adaptive bit rate caching strategy designed in Section 2.1, the bit rate selected by user k can be expressed accordingly.
Next, the delay of distributing video content is discussed. Let τ_n0 and τ_nn' be the unit delays when edge server n obtains video from remote servers and from edge server n', respectively. Generally, τ_n0 ≫ τ_nn'. Note that, because the network bandwidth between edge servers is limited, τ_nn' is highly correlated with the network topology between edge servers. Let τ_T be the delay of a video transcoding operation. The delays under the different distribution strategies are as follows:
1) For the distribution mode shown in Fig. 4(a), the content access delay is τ_n^1 = 0; that is, the video can be transmitted directly to users without fetching from other servers;
2) For the distribution mode shown in Fig. 4(b), the content access delay is the local transcoding delay;
3) For the distribution mode shown in Fig. 4(c), the content access delay is the transfer delay from edge server n';
4) For the distribution mode shown in Fig. 4(d), the content access delay is the transcoding delay on n' plus the transfer delay from n';
5) For the distribution mode shown in Fig. 4(e), the content access delay is the transfer delay of the higher version m' from n' plus the local transcoding delay, where m' > m (the same below);
6) For the distribution mode shown in Fig. 4(f), the content access delay is the round-trip transfer delay between n and n' plus the transcoding delay on n';
7) For the distribution mode shown in Fig. 4(g), the content access delay comprises the transfer delay from n' to n'', the transcoding delay on n'', and the transfer delay from n'' to n;
8) For the distribution mode shown in Fig. 4(h), the content access delay is determined by the remote unit delay τ_n0.
Therefore, for a video at a given bit rate, the content access delay is the sum of these terms weighted by the corresponding decision variables. Moreover, since video popularity follows a Zipf distribution [31], the video services requested by users are positively correlated with popularity. Therefore, edge servers preferentially cache more popular videos in order to shorten the delay of fetching from remote servers. Thus, this paper defines a QoE function weighted by γ_s, the popularity of video v_s.
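The Zipf weighting and the popularity-weighted objective can be sketched as follows. Since the extracted text does not reproduce the exact QoE expression, this sketch assumes a popularity-weighted access delay (lower is better); the skew parameter `alpha` is an assumed value.

```python
def zipf_popularity(num_videos, alpha=0.8):
    """Zipf popularity gamma_s for videos ranked 1..S (rank 1 most popular)."""
    weights = [1.0 / (s ** alpha) for s in range(1, num_videos + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def weighted_access_delay(delays, popularity):
    """Popularity-weighted content access delay across all videos.

    delays[s] is the access delay of video s under the chosen distribution
    strategy; popularity[s] is its Zipf weight gamma_s.
    """
    return sum(g * d for g, d in zip(popularity, delays))

gamma = zipf_popularity(4)
cost = weighted_access_delay([10.0, 20.0, 30.0, 40.0], gamma)
```

Because γ_s decreases with rank, reducing the delay of popular videos (e.g., by caching them at the edge) lowers the weighted objective the most.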

Constraints
Considering the "cache before transmission" and "transcode from high bit rate to low bit rate" principles, the limited computing and storage resources, and the uniqueness of distribution strategies, the video caching and content distribution constraints in practice can be divided into the following five categories.

1) Execution order constraints
Any user's video request first requires checking whether the video file is cached on edge servers. Thus, the binary decision variable δ_n^{sm} is defined: δ_n^{sm} = 1 indicates that a copy of video s at bit rate m is cached on edge server n, and δ_n^{sm} = 0 otherwise. The execution order constraint is then expressed in terms of δ_n^{sm}.

2) Transcoding order constraints
For distribution modes 2 and 4, which involve transcoding operations, transcoding is unidirectional from a high bit rate version to a low bit rate version. Hence b_n^{sm}(t) ≤ min(1, Σ_{m'>m} δ_n^{sm'}); that is, version m can be transcoded on server n only if some higher bit rate version is cached there.

3) Edge server storage capacity constraints
For edge server n, the total size of the video files it stores cannot exceed its storage limit, that is, Σ_{s∈V} Σ_{m∈M} δ_n^{sm} r_sm l_s ≤ C_n (32), where C_n is the storage capacity of edge server n.

4) Edge server computing capacity constraints.
Under the multi-edge server collaboration framework, each edge server must not only process the video transcoding operations of the users within its single-hop range but also assist other edge servers with transcoding, and the total transcoding load must not exceed the maximum capacity it can handle. Let N_n^{sm}(t) be the number of requests for video v_sm at edge server n, β_n the per-bit transcoding time of edge server n, and T_n^max the maximum computing delay of edge server n. The computing capacity constraint then follows:

5) Unique constraint of distribution strategy
When any user's video request arrives, the control center can give only one distribution strategy in response; that is, exactly one of the eight decision variables equals 1. Thus, the multi-edge collaborative video caching and content distribution strategy in this paper can be modeled as the following constrained integer optimization model:
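The storage and transcoding order constraints above can be checked for a candidate cache placement with a short feasibility sketch. The data layout (nested lists indexed by video s and version m) is illustrative; bit rates and durations below are made-up example values.

```python
def storage_ok(delta, rate, dur, capacity):
    """Constraint (32): sum_{s,m} delta[s][m] * r_sm * l_s <= C_n."""
    used = sum(delta[s][m] * rate[s][m] * dur[s]
               for s in range(len(delta))
               for m in range(len(delta[s])))
    return used <= capacity

def transcode_ok(delta_s, m):
    """Transcoding order constraint: version m can only be produced on this
    server if some higher bit rate version m' > m is cached locally."""
    return any(delta_s[mp] for mp in range(m + 1, len(delta_s)))

delta = [[0, 1, 1], [1, 0, 0]]             # two videos, three versions each
rate = [[1.0, 2.5, 5.0], [1.0, 2.5, 5.0]]  # bit rates (Mbps)
dur = [600, 600]                           # playback durations (s)
feasible = storage_ok(delta, rate, dur, capacity=6000)
```

Here the placement uses 2.5·600 + 5.0·600 + 1.0·600 = 5100 Mbit of a 6000 Mbit cache, so it is feasible; video 0 at version index 0 can be transcoded locally (versions 1 and 2 are cached), while video 1 cannot.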

Problem analysis
Integer optimization is NP-hard [32], and no effective general solution method has yet appeared.
Often, heuristic search algorithms [33,34] and branch-and-bound methods [35] are needed to approximate the global optimum. Moreover, the model built in Section 3.3 is more accurately a stochastic integer optimization problem, because the exact number of user video requests, N_n^{sm}(t), cannot be obtained in advance. However, on a longer time scale, N_n^{sm}(t) exhibits a self-similar characteristic. Therefore, this paper adopts the following strategy: 1) First, the historical user request counts {N_n^{sm}(t)} (m = 1, 2, …, M) form a set whose elements satisfy the bounds below. Small changes in the number of user video requests do not significantly affect the QoE of all users. Thus, without loss of generality, the number of running scenarios of edge server n can be set to Num_n = ⌈max N_n^{sm}(t) / λ_n⌉, where ⌈·⌉ denotes rounding up and λ_n is the division interval. The running scenarios can then be numbered 1, 2, …, Num_n according to N_n^{sm}(t) in ascending order, so for N edge servers the number of possible running scenarios is the product of the Num_n. It should be pointed out that this division effectively reduces the number of running scenarios, thereby reducing the difficulty of solving the optimization model.
2) For each running scenario, N_n^{sm}(t) is taken as the upper bound of the current interval, so the stochastic integer optimization model in Section 3.3 degenerates into a classic integer program per scenario. However, because the problem involves many decision variables with competing resource demands (for example, local and collaborative decoding both consume computing resources), classic solvers such as CPLEX may take too long. Thus, this paper further adopts the pyramid-structured intelligent evolution algorithm based on optimal cooperation to solve the problem.
According to the above two steps, the best video caching and content distribution strategies for the different scenarios can be computed offline and stored in the control center. In actual operation, the best strategy is selected by table lookup using only the current number of user video requests. The advantage of this scheme is that it avoids the extra computation time introduced by solving for the optimal strategy online. The table lookup has complexity O(n) and remains computationally efficient even when the number of running scenarios is large.
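The scenario quantization and offline table lookup can be sketched as follows; the table contents and the strategy labels are placeholders for the precomputed cache/distribution strategies.

```python
import math

def scenario_index(num_requests, lam):
    """Map a request count N to its scenario number ceil(N / lambda_n)."""
    return math.ceil(num_requests / lam)

def lookup_strategy(table, num_requests, lam):
    """Offline table: scenario number -> precomputed caching/distribution
    strategy, selected online by a simple lookup."""
    return table[scenario_index(num_requests, lam)]

# Hypothetical offline table for one edge server with lambda_n = 50.
table = {1: "strategy-A", 2: "strategy-B", 3: "strategy-C"}
chosen = lookup_strategy(table, 95, lam=50)
```

With λ_n = 50, a load of 95 requests falls in scenario ⌈95/50⌉ = 2, so the control center serves the precomputed strategy for that interval without re-solving the integer program.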

Pyramid group intelligent evolutionary solution algorithm based on optimal cooperation
The block diagram of the pyramid-structured swarm intelligent evolutionary algorithm is shown in Fig. 5. The population X is first sorted in ascending order of fitness value F to obtain the sorted fitness values F' and index I. The sorted population is divided into four parts according to formulas (38) and (39), and the number of individuals in each part satisfies Eq. (40): the group X_1, composed of the best individuals, is the smallest, and X_4, composed of the worst individuals, is the largest. X_1 is called the mining layer, X_2 and X_3 are called transfer layers, and X_4 is called the exploration layer. Each individual x_p^q updates its layer within its search neighborhood R_p^q according to Eq. (41). The neighborhood size of each layer satisfies formula (42): the mining layer generates search steps from a random number σ in [−1, 1] within a smaller neighborhood to refine the population, while the exploration layer mines potentially excellent individuals in a large neighborhood. As the iteration count q increases, each generation approaches the global optimum, so the search radius R_p^q of each layer is adaptively shrunk with a contraction factor 0 < μ < 1 according to Eq. (43), improving optimization efficiency. After new individuals are generated in each layer, the algorithm collaborates between layers: the excellent individuals of the exploration layer and of the transfer layers are passed to the mining layer and the exploration layer, respectively.
The transferred individuals are cultivated in the receiving layer: each is accelerated along its generation direction with acceleration step θ according to formula (44) to obtain the accelerated individual x.
The standard PES algorithm includes two kinds of collaboration: layer-to-layer group collaboration, which strengthens communication between populations, and collaboration between the individuals in a layer and their parent individuals. Although parent individuals guide the generation of offspring to some extent, they contribute little to producing excellent individuals, which slows convergence and reduces the algorithm's operating efficiency. The particle swarm algorithm updates the population using individual and global extreme values and converges quickly [36]; this update rule compensates for the lack of cooperation among individuals in the pyramid-structured intelligent evolution algorithm. This paper integrates this idea into the standard algorithm: each layer's individuals are updated through cooperation with the optimal individual of the current layer and the optimal individual of the entire population. The resulting pyramid-structured intelligent evolution algorithm based on the optimal cooperation strategy updates each layer's individuals as x_p^{q+1} = x_p^q + rand · R_p^q · [w(pBest_p^q − x_p^q) + (1 − w)(gBest^q − x_p^q)], where x_p^{q+1} is the new individual produced in layer p at generation q, x_p^q is the parent individual of layer p, rand is a random number in [0, 1], and R_p^q is the current search radius. pBest_p^q is the optimal individual of layer p at generation q, and gBest^q is the optimal individual of the entire population at generation q. The weight 0 < w < 1 reflects the new individual's bias toward the pBest_p^q direction, and 1 − w the degree of bias toward the gBest^q direction.
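The optimal-cooperation update can be sketched directly from the definitions of pBest_p^q and gBest^q. The function below is an illustrative per-individual step, assuming the update form described above (the exact equation is not legible in the extracted text); `w` and `radius` are example values.

```python
import random

def update_individual(x, pbest, gbest, radius, w=0.6):
    """Optimal-cooperation update: move the parent x along the joint
    direction of the layer best (pbest) and the global best (gbest),
    scaled by the layer's current search radius."""
    return [xi + random.random() * radius *
            (w * (pi - xi) + (1 - w) * (gi - xi))
            for xi, pi, gi in zip(x, pbest, gbest)]

random.seed(0)
x_new = update_individual([0.0, 0.0], [1.0, 1.0], [2.0, 2.0], radius=1.0)
```

Because both bests lie in the positive direction here, every component of the joint step w(pBest − x) + (1 − w)(gBest − x) is positive, so the new individual moves toward them.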
Based on the optimal cooperation strategy, the parent individual searches along the joint direction generated by the individual extreme value and the global extreme value. This not only strengthens cooperation among individuals within a layer but also connects the individuals of each layer with the globally optimal individual. Under layer-to-layer collaboration and optimal cooperation among individuals, the convergence of the pyramid-structured intelligent evolution algorithm is accelerated and its optimization efficiency improved. The steps for applying the algorithm to solve the optimal video caching and content distribution strategy are as follows.
Step 1: Initialize parameters, set the maximum number of iterations I max and population size G, and initial search radius R q p of each layer; Step 2: Randomly generate the initial population {x 0 } of population size G and juxtapose the number of iterations t = 1; Step 3: Calculate the fitness value of population {x q } (i.e. the current QoE function). According to the size of fitness value J(x q ), the population is divided into four sub-populations X q (q = 1, 2, 3, 4), and the contemporary individual extremum pBest q p and global extremum pBes t q p are recorded; Step 4: According to Eq. (34), generate a new individual x qþ1 p for individual x q p of the q(q = 1, 2, 3, 4) layer. And select the corresponding number of individuals from each layer to pass to the upper layer, and complete the cultivation according to formula (34). Integrate the generated new individuals, passing individuals and parent individuals, and select individuals with the corresponding number of populations as updated group fx qþ1 p g of this layer.
Step 5: Merge the updated groups {x_p^{q+1}} of all layers into the new population {x^{q+1}} of generation q + 1.
Step 6: Check the termination condition. When the number of iterations reaches t = I_max, output gBest^q and stop; otherwise, set t = t + 1, update the search radius R_p^q of each layer, and return to Step 3.

Simulation settings
We suppose that each version of the same video has the same request probability. We set τ_n0 = 100 ms, τ_nn' is uniformly distributed in the [5, 50] ms interval, and the number of edge servers is 10. The reference value of the unit-bit transcoding time β_n is uniformly set to 2 μs, the reference value of the edge server cache capacity α is set to 50, and the maximum computation delay T_n^max allowed by servers is set to 150 ms.
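As a concrete illustration, Steps 1–6 can be sketched in Python. This is a minimal, hypothetical reconstruction, not the paper's implementation: the exact form of Eq. (34) is assumed to pull each individual toward a w-weighted blend of its layer-best and the global best, and the layer-passing mechanism of Step 4 is simplified to elitist selection within each layer.

```python
import random

def optimize(fitness, dim, pop_size=40, layers=4, max_iter=200,
             w=0.6, radius=1.0, decay=0.97):
    # Sketch of the pyramid-structure evolution with the optimal
    # cooperation update (minimization). Eq. (34)'s form is assumed.
    pop = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(pop_size)]
    gbest = min(pop, key=fitness)                  # global extremum gBest
    for _ in range(max_iter):
        pop.sort(key=fitness)                      # rank: best layer first
        per = pop_size // layers
        new_pop = []
        for p in range(layers):
            layer = pop[p * per:(p + 1) * per]
            pbest = min(layer, key=fitness)        # layer extremum pBest
            for x in layer:
                # assumed Eq. (34): step toward a w-weighted blend of
                # pBest and gBest, scaled by the current search radius
                child = [xi + random.random() * radius *
                         (w * (pb - xi) + (1 - w) * (gb - xi))
                         for xi, pb, gb in zip(x, pbest, gbest)]
                new_pop.append(min((x, child), key=fitness))  # keep better
        pop = new_pop
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(gbest):
            gbest = cand
        radius *= decay                            # shrink search radius
    return gbest

# usage: minimize the sphere function as a stand-in for the QoE objective
best = optimize(lambda v: sum(x * x for x in v), dim=3)
```

In the paper the fitness is the QoE function of Step 3; the sphere function above is only a placeholder to make the sketch self-contained.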

Experimental results and discussion
In addition, the comparison algorithms are the JCPNonCo scheme proposed in reference [18], the CCNonP scheme proposed in reference [27], and the APCP-OptRs scheme proposed in reference [28].

Parameter performance analysis

Transcoding capability of edge servers
The transcoding capability of an edge server is closely related to the CPU performance of the server itself, so β_n reflects the server's performance to a large extent: the smaller β_n is, the higher the CPU performance and the stronger the video transcoding capability. Figure 6 shows how the QoE function changes as the unit transcoding time β_n increases from 0.5 μs to 5 μs in steps of 0.5 μs. The overall trend is that the value of the QoE function gradually increases with β_n. However, when β_n changes from 0.5 μs to 3 μs, the QoE changes relatively slowly; after β_n exceeds 3 μs, the QoE function value rises sharply. The reason is that when β_n is small, the transcoding capability of each server is strong, so there are more options for video caching and content distribution strategies. When β_n is large, the long processing delay of a single server means that providing coordinated transcoding to other edge servers is no longer optimal, and edge servers tend to cache videos at all bit rates to satisfy users' video experience. Note, however, that a lower transcoding time implies higher CPU performance requirements, which raises the economic cost.

Cache capacity of edge server
To analyze the impact of edge server cache capacity (that is, α in Eq. (2)) on QoE performance, Fig. 7 shows the value of the QoE function as α changes from 10 to 50 in steps of 5. When α grows from 10 to 35, QoE gradually decreases; when α exceeds 35, the QoE performance no longer changes significantly. We further define the storage hit ratio as the fraction of the video types requested by users that are cached locally on edge servers. Figure 8 shows the relationship between the storage hit ratio and the cache capacity α. As the cache capacity of edge servers increases, the storage hit ratio gradually increases; when α exceeds 25, the hit ratio reaches 100% and no longer changes with capacity. This is because when the cache capacity is large, more resource copies can be stored on the local edge server, so there is no need to request resources from the cloud or adjacent edge servers, or to rely on other edge server nodes to assist in transcoding. The numerical results thus show that increasing the cache capacity of edge servers improves users' QoE only to a limited extent, while bringing additional economic cost.
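To make the saturation behaviour in Fig. 8 concrete, the following sketch (an illustrative model, not the paper's simulation code) assumes an edge server caches the α most popular videos under a Zipf(γ_s) popularity distribution and computes the expected storage hit ratio analytically.

```python
def zipf_probs(n_videos, gamma):
    # Zipf popularity: the request probability of the rank-i video is
    # proportional to 1 / i^gamma
    weights = [1.0 / (i ** gamma) for i in range(1, n_videos + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def storage_hit_ratio(n_videos, alpha, gamma):
    # If the edge server caches the alpha most popular videos, the
    # expected storage hit ratio is the total probability mass of
    # those top-alpha videos.
    probs = zipf_probs(n_videos, gamma)
    return sum(probs[:min(alpha, n_videos)])

# e.g. with 40 video types and gamma_s = 0.75 (hypothetical catalogue
# size), the hit ratio grows with alpha and saturates at 1.0
curve = [storage_hit_ratio(40, a, 0.75) for a in (10, 25, 40)]
```

Once α covers every video type that users actually request, the hit ratio stays at 100% and further capacity only adds cost, which mirrors the flattening observed in Fig. 8.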

QoE comparison of different strategies
Set the Zipf distribution parameter γ_s = 0.75. Figure 9 shows the QoE values of the different algorithms as the number of edge servers changes from 1 to 15.
As can be seen from Fig. 9, when there is only one edge server, there is almost no difference in the QoE values of the algorithms. This is because no edge server collaboration mechanism exists in this case, and all video caching and transcoding operations are performed on the same server. As the number of edge servers increases, reference [18] does not exploit any collaboration mechanism among the edge servers: each edge server only exchanges data with the remote server and does not cooperate with other edge servers, so its QoE value remains at about 260. References [27, 28] and the algorithm proposed in this paper all contain collaboration mechanisms, so their QoE values gradually decrease as the number of edge servers increases. However, compared with the two-hop mode involving three edge servers in this paper, reference [27] provides no collaborative transcoding service between edge servers, and reference [28] considers edge server collaboration only for video caching, i.e., a single-hop mode with transcoding between at most two edge servers. The proposed algorithm therefore offers more feasible solutions for caching and content distribution; correspondingly, its QoE value is reduced by a further 22.84% and 14.40% compared with the algorithms of references [27] and [28], respectively.

Suppose the number of edge servers is 10. Figure 10 shows the QoE function value as the Zipf distribution parameter γ_s changes; without loss of generality, γ_s changes from 0.3 to 0.9 in steps of 0.1. All methods show a similar trend: as the distribution parameter grows, the QoE function decreases. This shows that caching popular videos on edge servers can significantly improve users' satisfaction with video requests.
However, as γ_s changes from 0.3 to 0.9, the QoE value of the algorithm proposed in this paper decreases by a further 35.22%, 16.40% and 2.16% compared with references [18], [27] and [28], respectively. This is because this paper additionally considers a two-hop collaboration scenario among three edge servers when designing the video content distribution strategy, thereby providing additional strategy options.

Comparison of average access delay of different strategies
Further, the performance of the algorithms is tested in the OPNET environment. Figure 11 shows the average access delay of users under each edge server for the different video caching and content distribution strategies when the number of edge servers is 10.
It can be seen from Fig. 11 that the algorithm in reference [18] does not consider collaboration between edge servers, so its average user access delay remains unchanged and is the largest of the four methods. References [27, 28] and our proposed algorithm all contain collaboration mechanisms, so their average user access delays are shorter than that of reference [18]. The proposed algorithm further considers video caching and transcoding in a two-hop cooperative mode among three edge servers, which is essentially an extension and derivation of references [27, 28]. This provides additional decision-making options for the control center, allowing the multi-edge-server system to work in a more efficient state. The user access delay is therefore shortened by 45.21%, 24.66% and 14.06% compared with references [18], [27] and [28], respectively, an average reduction of 27.98%.

Conclusion
This paper proposes a video caching and content distribution mechanism for a multi-edge collaborative computing environment. Based on the definition of user QoE, video caching and content distribution are modeled as a random integer programming problem, which is solved using a pyramid-structure intelligent evolution algorithm based on the optimal cooperation strategy. Experiments show that the proposed algorithm enriches the decision-making options for video caching and content distribution strategies by considering two-hop collaboration scenarios among three edge servers. It also keeps the computational load and storage demands on edge servers reasonable, giving it good prospects for practical engineering application. The established model works in an offline mode; in future work we will study real-time online multi-edge-server collaborative video caching and content distribution strategies.