
Advances, Systems and Applications

Blockchain-based 6G task offloading and cooperative computing resource allocation study

Abstract

In the upcoming era of 6G, the accelerated development of the Internet of Everything and high-speed communication is poised to provide people with an efficient and intelligent life experience. However, the exponential growth in data traffic is expected to pose substantial task processing challenges, and relying solely on the computational resources of individual devices may fail to meet the demand for low latency. Additionally, the lack of trust between different devices limits the development of 6G networks. In response, this study proposes a blockchain-based 6G task offloading and collaborative computational resource allocation (CERMTOB) algorithm. The proposed algorithm first designs a blockchain-based 6G cloud-network-edge collaborative task offloading model, incorporating a blockchain network on the edge layer to improve trust between terminals and blockchain nodes. Subsequently, the optimization objective is established to minimize the total latency of offloading, computation, and blockchain consensus. The optimal offloading scheme is determined using the wolf-fish collaborative search algorithm (WF-CSA) to minimize the total delay. Simulation results show that WF-CSA reduces the total delay by up to 42.58% compared with the fish swarm algorithm, the wolf pack algorithm, and the binary particle swarm optimization algorithm. Furthermore, introducing blockchain into the cloud-edge-end offloading system improves the communication success rate by up to 14.93% compared with the blockchain-free system.

Introduction

With the ongoing advancements in communication technology and science, the sixth generation (6G) of mobile communication technology has emerged as a forefront technology, poised to interconnect everything in the future [1,2,3]. 6G networks are anticipated to enable unprecedented mobile broadband speeds, low-latency communications, and massive terminal connectivity. This positions 6G as a catalyst for technological developments in various fields, including autonomous driving, smart cities, virtual reality, and automated industries [4,5,6]. However, the huge resource requirements associated with handling large-scale networked devices, real-time applications, and high-speed data transfers present significant obstacles to the rational allocation and efficient use of resources. In this case, traditional resource allocation methods may face the challenge of high transmission latency. Thus, the urgent need to reduce latency by improving resource utilization efficiency and enabling flexible offloading in 6G networks has become apparent. Addressing these core issues has become a key research direction in the field of 6G resource allocation [7,8,9,10].

Researchers and scholars have conducted extensive research in the field of 6G computing resource allocation. Prathiba et al. [11] proposed a resource management algorithm aiming to offload and manage resources with low latency. This algorithm uses a stochastic network algorithm to calculate the upper-bound latency of heterogeneous communication systems and determine the probability associated with the offloading mechanism; computational tasks are then offloaded to the optimal network based on the detected probability values. Lin et al. [12] designed a 6G large-scale IoT architecture that facilitates dynamic resource allocation and introduced a resource allocation algorithm based on artificial intelligence. They additionally devised a dynamic nested neural network aimed at facilitating online adaptation of the learning model structure to effectively address the evolving demands of dynamic resource allocation. Qin et al. [13] introduced a novel 6G resource allocation framework centered around air-heaven-airspace integration. They addressed the intricate issue of triple matching among equipment, content sources, and users within air-heaven-airspace integrated networks through content-centric and client-focused resource allocation techniques. Goudarzi et al. [14] introduced a computational resource allocation model designed to address the joint optimization challenge of queue-based computational offloading and adaptive computational resource allocation. This method prioritizes maintaining task computation latency for all ground mobile nodes (MNs) over a defined time frame. While meeting the task computation constraints, it seeks to maximize the overall reachability of MNs while minimizing energy consumption for both UAVs and MNs. Gong et al. [15] proposed an innovative framework in the field of communication, employing deep reinforcement learning (DRL) to facilitate task offloading decomposition.
Meanwhile, the task offloading and resource allocation processes are optimized collaboratively through the Isotonic Action Generation Technique (IAGT) and a dynamic update strategy. Qi et al. [16] employ network duals as central controllers in combination with crowdsourcing techniques to incentivize mobile users to adhere to predefined paths for sharing network resources. The creation of these paths is formulated as a cost-constrained user recruitment optimization problem. Initially, the study focuses on a scenario where only one mobile user offers network resources, presenting a pseudo-polynomial time algorithm. Subsequently, for the more intricate scenario involving multiple mobile users, a solution based on graph partitioning is proposed. Lastly, the study determines the minimum expected budget needed to maximize utility within the ideal model. However, the studies mentioned above do not address the trust issues resulting from massive user access, where the data being transmitted may be at risk of being eavesdropped, leaked, or falsified. Therefore, ensuring secure, trustworthy, and efficient collaboration in 6G resource allocation and computational offloading has become a key issue that needs urgent attention [17, 18].

Blockchain plays a significant role in achieving security and trustworthiness in resource allocation. As a distributed shared ledger, its tamper-proof, open, transparent, and decentralized features can effectively establish trust, protect the privacy of information, and achieve reliable authentication of devices [19,20,21]. In the 6G communication environment, the integration of blockchain technology with resource allocation offers important advantages. Blockchain can effectively store the computational tasks triggered by resource allocation as transaction information in a block. This transaction information is hashed to generate the Merkle root, and the block hash is calculated from the Merkle root, timestamp, nonce, and other header information. Blockchain ensures the data integrity of each block by linking blocks into a chain through the Pre_hash field, which stores the hash of the preceding block [22, 23]. Meanwhile, blockchain ensures the legitimacy of end devices' identities through a consensus mechanism, effectively reducing the threat of unauthorized devices. The consensus mechanism verifies the legitimate identity of end devices through the consistent cognition of nodes in the network, guards against unauthorized devices entering the system, and enhances the overall trustworthiness of the resource allocation process [24,25,26]. Therefore, realizing the efficient combination of blockchain and 6G resource allocation has become a major hotspot in current research.

Additionally, both domestic and international research on blockchain-based 6G resource allocation has yielded some results. Yao et al. [27] proposed a blockchain-enabled cloud-edge device (BC-CED) offloading scheme for computational collaboration tasks. This scheme addresses the offloading problem through reinforcement learning and incorporates an incentive mechanism to ensure the honesty of the devices. Okegbile et al. [28] investigated collaborative data sharing schemes aimed at facilitating collaboration among multiple data providers and users through the integration of blockchain and cloud-edge computing. The results showed that system performance analysis contributes to the effective deployment of a data sharing system. Li et al. [29] constructed a cloud-edge-end collaborative resource allocation framework, addressing the system energy consumption and delay minimization problem. They obtained the optimal policy using collective reinforcement learning (CRL) by co-optimizing the offloading policies, group intervals, and transmission power. Feng et al. [30] introduced a blockchain-enabled mobile edge computing resource allocation framework aimed at enhancing computation rates and boosting transaction throughput. This framework achieves its goals by concurrently optimizing offloading policies, power allocation, block size, and block interval. Jain et al. [31] introduced a novel approach for resource allocation in IoE environments and 6G networks utilizing blockchain technology. They devised a quasi-opposite search and rescue optimization (QO-SRO) algorithm aimed at enhancing the efficiency of resource allocation processes. Although the above literature has yielded promising results in this research area, it has not considered the problem of collaboratively offloading computational tasks to other base stations for processing, nor has it considered the problem of communication interference in practical scenarios, as highlighted in Table 1.

Table 1 Differences between this paper and previous studies

To address these key issues, this study not only extends the offloading range of computational tasks to achieve collaborative task processing among multiple base stations, but also considers the communication interference problem. Through these comprehensive considerations, the overall resource utilization of the system is improved, and the authenticity and reliability of the system evaluation are enhanced in a way that is closer to the actual communication environment.

The primary contributions of this study are outlined as follows:

  1. Designing a blockchain-based 6G cloud-network-edge-end collaborative task offloading model, comprising multiple edge computing servers, communication base stations, user terminals, and a cloud server. Each communication base station and its MEC server are regarded as a blockchain node, together forming a blockchain system. A blockchain network layer is added to the cloud-edge-end cooperative offloading, enabling computational tasks to be offloaded to other base stations with available computational resources for collaborative computation. The model also selects the most trustworthy D nodes for consensus by calculating the trust values of end devices toward blockchain nodes. This approach ensures sufficient computational resources, reduces task processing time, and greatly enhances the security and trustworthiness of the resource allocation process.

  2. Proposing a blockchain-based 6G task offloading and collaborative computational resource allocation algorithm (CERMTOB, Cloud-Edge Resource Management and Task Offloading in Blockchain networks). The algorithm addresses a delay-minimization problem covering the total task offloading delay, the total computational task processing delay, and the blockchain network layer consensus delay. To solve this problem, the wolf-fish collaborative search algorithm (WF-CSA) is proposed. WF-CSA divides the fish into head fish, explorer fish, and fierce fish, assigning the seven behaviors of the wolf pack and fish swarm according to their characteristics. This design accelerates the convergence of the algorithm and achieves the goal of minimizing the time delay.

  3. The simulation results demonstrate that the WF-CSA algorithm achieves a notable reduction in total delay compared to AFSA, WPA, and BPSO, with improvements of up to 42.58%, 28.58%, and 15.93%, respectively, across varying task sizes. WF-CSA also demonstrates superior performance under varying numbers of users and MEC server computing capacities. Meanwhile, the cloud-edge-end offloading system integrated with blockchain improves the communication success rate by up to 14.93% compared to the system without blockchain.

System model

Network model

The blockchain-based 6G cloud-network-edge-end collaborative task offloading model proposed in this study is shown in Fig. 1. The system model comprises a cloud server layer, a blockchain network layer, an edge layer, and a user terminal layer. The cloud server layer includes a cloud server CS. The edge layer contains M base stations, denoted by the set \(BS=\{BS_{1},BS_{2},\cdots ,BS_{m},\cdots ,BS_{M}\}\). Each base station is equipped with an MEC server; the MEC server of base station \(BS_{m}\) is denoted \(MEC_{m}\), and the set of M MEC servers is \(MEC=\{MEC_{1},MEC_{2},\cdots ,MEC_{m},\cdots ,MEC_{M}\}\). In the coverage area of each base station there are N user terminals, and the user terminals covered by base station \(BS_{m}\) are denoted \(UE^{m}=\{UE_{1}^{m},UE_{2}^{m},\cdots ,UE_{n}^{m},\cdots ,UE_{N}^{m}\}\). Therefore, the user terminal layer comprises \(M\times N\) user terminals in total.

Fig. 1 The blockchain-based 6G cloud-network-edge-end collaborative task offloading model

In the blockchain network layer, each base station and its MEC server are treated as distinct blockchain nodes, containing a total of M blockchain nodes. Let \(V_{n\rightarrow m}^{trust}\) represent the confidence value of user terminal n to blockchain node m. The consensus mechanism selects the most trustworthy \(D(D<M)\) blockchain nodes from the total blockchain nodes, while non-selected nodes are only responsible for receiving data and bookkeeping. This design aims to mitigate the threat of unauthorized devices through consensus algorithms, thereby improving the security and trustworthiness of resource allocation and computation offloading. Details of the symbols used in the paper are shown in Table 2 for reference.

Table 2 Summary of notations

Offloading model

In this study, it is assumed that a fine-grained computation task \(T_{n}^{m}\) is generated at a given time on a user terminal \(UE_{n}^{m}\) situated in the coverage area of base station \(B{{S}_{m}}\). This task can be partitioned into multiple subtasks. \(T_{n}^{m}\) can be expressed as a triple \(T_{n}^{m}=<d_{n}^{m},s_{n}^{m},\tau _{n}^{m}>\), where \(d_{n}^{m}\) denotes the total input data size of computational task \(T_{n}^{m}\) in bits; \(s_{n}^{m}\) represents the computing power required per unit of task data, in cycles/bit; and \(\tau _{n}^{m}\) is the deadline for task completion in seconds.

Due to the limited local computing power of the user terminal, completing the computing task within the deadline \(\tau _{n}^{m}\) is not feasible. Therefore, the task needs to be offloaded to the MEC server and cloud server for processing [32]. Simultaneously, if the MEC server is overwhelmed with computational tasks and other base stations have ample computational resources, tasks can be offloaded to those base stations for assistance. In this study, the partial offloading approach aims to optimize the utilization of computing resources at the user terminal layer, edge layer, and cloud server layer. The proportions of subtasks for local terminal computing, offloading to the MEC servers, and cloud server computing are expressed by \(\alpha _{n}^{m},\beta _{n}^{m,1},\beta _{n}^{m,2},\cdots ,\beta _{n}^{m,m},\cdots ,\beta _{n}^{m,M},\gamma _{n}^{m}\), respectively, with \(\alpha _{n}^{m}+\beta _{n}^{m,1}+\beta _{n}^{m,2}+\cdots +\beta _{n}^{m,m}+\cdots +\beta _{n}^{m,M}+\gamma _{n}^{m}=1\).
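As a minimal numerical sketch of the partial-offloading split (all ratio values below are assumed for illustration, not taken from the paper), the partition constraint can be checked directly:

```python
# Illustrative sketch of the partial-offloading split (assumed values):
# alpha = local share, beta[i] = share sent to MEC server i, gamma = cloud share.
M = 3                             # number of base stations / MEC servers
alpha = 0.2                       # fraction of the task computed locally
beta = [0.3, 0.1, 0.2]            # fractions offloaded to each MEC server
gamma = 1.0 - alpha - sum(beta)   # remainder offloaded to the cloud server

# The split must cover the whole task: alpha + sum(beta) + gamma = 1
assert abs(alpha + sum(beta) + gamma - 1.0) < 1e-12
print(gamma)
```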

Wireless communication links are used between user terminals and base stations, while optical fibre provides wired communication between base stations and between base stations and the cloud server. In this study, the Orthogonal Frequency Division Multiple Access (OFDMA) technique is used for the uplink offloading of user terminals within the same base station. The communication bandwidth B of each base station is divided into N mutually orthogonal wireless communication subchannels, each with bandwidth \(W=\frac{B}{N}\), and the set of communication subchannels is defined as \(F=\{{{f}_{1}},{{f}_{2}},\cdots ,{{f}_{n}},\cdots ,{{f}_{N}}\}\). Each subchannel is assigned to one user terminal, ensuring that users within the range of the same base station experience no interference from each other [33]. Assuming that user terminals within the range of different base stations can multiplex the same wireless communication subchannel, \(UE_{n}^{1},UE_{n}^{2},\cdots ,UE_{n}^{m},\cdots ,UE_{n}^{M}\) denotes that the user terminals n within the M base stations multiplex subchannel \({{f}_{n}}\). Therefore, \(UE_{n}^{m}\) experiences interference from user terminals of other base stations, given by

$$\begin{aligned} I_{n,m}^{{{f}_{n}}}=\sum \nolimits _{i\in U{{E}^{m}}\backslash \{UE_{n}^{m}\}}{\sum \nolimits _{j\in BS\backslash \{B{{S}_{m}}\}}{a_{ij}^{{{f}_{n}}}}}{{p}_{i}}g_{i,j}^{{{f}_{n}}} \end{aligned}$$
(1)

where \(a_{ij}^{{{f}_{n}}}\in \{0,1\}\). Assuming that a terminal can offload its task through only one subchannel, \(a_{ij}^{{{f}_{n}}}=1\) when the computation task generated by user terminal i is offloaded to MEC server j over subchannel \({{f}_{n}}\), and \(a_{ij}^{{{f}_{n}}}=0\) otherwise. \(g_{i,j}^{{{f}_{n}}}\) denotes the channel gain between user terminal i and MEC server j, calculated as \(g_{i,j}^{{{f}_{n}}}={{\xi }_{i,j}}(t){{h}_{0}}{{({{d}_{0}}/{{d}_{i,j}})}^{\theta }}\), where \({{\xi }_{i,j}}(t)\) is the Rayleigh fading between user terminal i and MEC server j, \({{h}_{0}}\) denotes the path loss constant, \({{d}_{0}}\) is the reference distance, and \({{d}_{i,j}}\) represents the distance between user terminal i and MEC server j [34].

Similarly, the signal-to-interference-plus-noise ratio for the offloading task of terminal \(UE_{n}^{m}\) is obtained as

$$\begin{aligned} SINR_{n,m}=\frac{p_{n}^{m}g_{n,m}^{{{f}_{n}}}}{I_{n,m}^{{{f}_{n}}}+{{\sigma }^{2}}} \end{aligned}$$
(2)

where \({{\sigma }^{2}}\) is the background noise variance. Using Shannon’s formula, the transmission rate when terminal \(UE_{n}^{m}\) offloads the task can be obtained as

$$\begin{aligned} {{R}_{n,m,{{f}_{n}}}} = W{{\log }_{2}}(1+SINR_{n,m}) = W{{\log }_{2}}\left( 1+\frac{p_{n}^{m}g_{n,m}^{{{f}_{n}}}}{I_{n,m}^{{{f}_{n}}}+{{\sigma }^{2}}}\right) \end{aligned}$$
(3)

and the communication latency of the offloading task from terminal \(UE_{n}^{m}\) to MEC server is

$$\begin{aligned} {{T}_{n,m,{{f}_{n}}}}=\frac{(1-\alpha _{n}^{m})d_{n}^{m}}{{{R}_{n,m,{{f}_{n}}}}} \end{aligned}$$
(4)
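To make Eqs. (1)-(4) concrete, the sketch below traces one terminal's offload through interference, SINR, Shannon rate, and uplink latency. All parameter values (powers, gains, bandwidth, task size) are assumptions chosen for illustration, not the paper's simulation settings.

```python
import math

# Channel-gain model from the text: g = xi * h0 * (d0 / d)^theta
def channel_gain(xi, h0, d0, d, theta):
    return xi * h0 * (d0 / d) ** theta

W = 1e6          # subchannel bandwidth in Hz (assumed)
sigma2 = 1e-13   # background noise power sigma^2 (assumed)
p_tx = 0.1       # transmit power p_n^m in watts (assumed)
g_signal = channel_gain(xi=0.8, h0=1e-4, d0=1.0, d=100.0, theta=3.0)

# Eq. (1): interference from terminals in other cells reusing subchannel f_n,
# each contributing its transmit power times its cross-channel gain
interferers = [(0.05, channel_gain(0.6, 1e-4, 1.0, 300.0, 3.0)),
               (0.05, channel_gain(0.7, 1e-4, 1.0, 400.0, 3.0))]
I_fn = sum(p * g for p, g in interferers)

sinr = p_tx * g_signal / (I_fn + sigma2)   # Eq. (2)
rate = W * math.log2(1.0 + sinr)           # Eq. (3), bit/s

d_task, alpha = 1e6, 0.25                  # task size in bits, local share
t_up = (1 - alpha) * d_task / rate         # Eq. (4): uplink latency in s
print(sinr, rate, t_up)
```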

The total communication latency of the terminal offload task to MEC server is

$$\begin{aligned} T_{mec}^{trans}=\sum \limits _{m=1}^{M}{\sum \limits _{n=1}^{N}{\frac{(1-\alpha _{n}^{m})d_{n}^{m}}{{{R}_{n,m,{{f}_{n}}}}}}} \end{aligned}$$
(5)

In this study, it is assumed that the transmission delay incurred between base stations utilizing fiber optic communication is negligible. The fixed transmission rate for fiber link communication between the MEC server and the cloud server can be denoted by \({{R}_{m,c}}\), and the loss in fiber optic transmission is ignored. Therefore, the transmission delay of the task offloading from base station \(B{{S}_{m}}\) to the cloud server is

$$\begin{aligned} {{T}_{m,c}}=\frac{\gamma _{n}^{m}d_{n}^{m}}{{{R}_{m,c}}} \end{aligned}$$
(6)

The total communication latency of the base station offloading tasks to the cloud server is calculated as

$$\begin{aligned} T_{cloud}^{trans}=\sum \limits _{m=1}^{M}{\sum \limits _{n=1}^{N}{\frac{\gamma _{n}^{m}d_{n}^{m}}{{{R}_{m,c}}}}} \end{aligned}$$
(7)

The total communication latency for task offloading is calculated as

$$\begin{aligned} {{T}^{trans}}=T_{mec}^{trans}+T_{cloud}^{trans} \end{aligned}$$
(8)
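The aggregation in Eqs. (5)-(8) can be sketched as nested sums over the M base stations and the N users in each cell; all rates and task sizes below are assumed values for illustration.

```python
# Sketch of Eqs. (5)-(8): total offloading latency aggregated over M base
# stations and N users per cell. All rates and task sizes are assumed.
M, N = 2, 2
d = [[1e6] * N for _ in range(M)]        # task sizes d_n^m in bits
alpha = [[0.25] * N for _ in range(M)]   # local-computation ratios alpha_n^m
gamma = [[0.25] * N for _ in range(M)]   # cloud-offloading ratios gamma_n^m
R_up = [[2e6] * N for _ in range(M)]     # uplink rates R_{n,m,f_n} in bit/s
R_mc = 1e8                               # fixed fibre rate R_{m,c} to the cloud

# Eq. (5): uplink latency of every share that leaves the terminals
T_mec_trans = sum((1 - alpha[m][n]) * d[m][n] / R_up[m][n]
                  for m in range(M) for n in range(N))
# Eq. (7): fibre latency of the cloud-bound shares
T_cloud_trans = sum(gamma[m][n] * d[m][n] / R_mc
                    for m in range(M) for n in range(N))
T_trans = T_mec_trans + T_cloud_trans    # Eq. (8)
print(T_trans)
```

With these numbers the wireless uplink dominates the fibre hop, which matches the model's assumption that inter-base-station fibre delay is negligible by comparison.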

Computation model

Since each subtask can be processed at a different location, the computation modes can be categorized into three types: the local computation mode, processed at the local terminal; the MEC server computation mode, processed on an MEC server; and the cloud server computation mode, processed on the cloud server. The three computation modes consume computational resources at different endpoints and can therefore process tasks in parallel.

Local computation

Assuming that the CPU computation frequency of the nth user terminal within the range of base station \(B{{S}_{m}}\) is \(f_{n,m}^{local}\), then according to the computation task \(T_{n}^{m}\) and the local offloading ratio \(\alpha _{n}^{m}\), the delay \(T_{n,m}^{local}\) required for local computation is

$$\begin{aligned} T_{n,m}^{local}=\frac{\alpha _{n}^{m}d_{n}^{m}s_{n}^{m}}{f_{n,m}^{local}} \end{aligned}$$
(9)

This, in turn, gives the total delay of local computation as

$$\begin{aligned} T_{local}^{exe}=\sum \limits _{m=1}^{M}{\sum \limits _{n=1}^{N}{\frac{\alpha _{n}^{m}d_{n}^{m}s_{n}^{m}}{f_{n,m}^{local}}}} \end{aligned}$$
(10)

MEC server computing

Assuming that the CPU computing frequency assigned by MEC server \(ME{{C}_{m}}\) to the subtasks of computation task \(T_{n}^{m}\) is \(f_{n,m}^{MEC}\), then according to the computation task \(T_{n}^{m}\) and the MEC offloading ratios \(\beta _{n}^{m,1},\beta _{n}^{m,2},\cdots ,\beta _{n}^{m,m},\cdots ,\beta _{n}^{m,M}\), the delay \(T_{n,m}^{MEC}\) required for computation by the MEC servers is

$$\begin{aligned} T_{n,m}^{MEC}=\sum \limits _{i=1}^{M}{\frac{\beta _{n}^{m,i}d_{n}^{m}s_{n}^{m}}{f_{n,m}^{MEC}}} \end{aligned}$$
(11)

This, in turn, gives the total delay calculated by the MEC server as

$$\begin{aligned} T_{mec}^{exe}=\sum \limits _{m=1}^{M}{\sum \limits _{n=1}^{N}{\sum \limits _{i=1}^{M}{\frac{\beta _{n}^{m,i}d_{n}^{m}s_{n}^{m}}{f_{n,m}^{MEC}}}}} \end{aligned}$$
(12)

Cloud server computing

Assuming that the CPU computation frequency assigned by the cloud server CS to the computation task \(T_{n}^{m}\) subtask is \(f_{n,m}^{cloud}\), and based on the computation task \(T_{n}^{m}\) and the cloud server offload ratio \(\gamma _{n}^{m}\), the delay \(T_{n,m}^{cloud}\) required by the cloud server for computation can be obtained as

$$\begin{aligned} T_{n,m}^{cloud}=\frac{\gamma _{n}^{m}d_{n}^{m}s_{n}^{m}}{f_{n,m}^{cloud}} \end{aligned}$$
(13)

This, in turn, gives the total delay of the cloud server computation as

$$\begin{aligned} T_{cloud}^{exe}=\sum \limits _{m=1}^{M}{\sum \limits _{n=1}^{N}{\frac{\gamma _{n}^{m}d_{n}^{m}s_{n}^{m}}{f_{n,m}^{cloud}}}} \end{aligned}$$
(14)

In summary, the adoption of partial offloading enables the computational tasks generated by the user terminals to be processed concurrently, thereby reducing their overall processing latency \({{T}^{exe}}\):

$$\begin{aligned} {{T}^{exe}}=\max \left\{T_{local}^{exe},T_{mec}^{exe},T_{cloud}^{exe}\right\} \end{aligned}$$
(15)
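Because the three modes run in parallel, the bottleneck is whichever total finishes last, as Eq. (15) states. A minimal sketch with assumed frequencies and shares:

```python
# Sketch of Eqs. (9)-(15): local, MEC, and cloud computation run in parallel,
# so the overall processing latency is the maximum of the three totals.
# All frequencies, ratios, and task parameters below are assumed values.
M, N = 2, 2
d, s = 1e6, 100.0                          # bits per task, cycles per bit
f_local, f_mec, f_cloud = 1e9, 5e9, 2e10   # CPU frequencies in Hz
alpha, gamma = 0.25, 0.25                  # local and cloud shares
beta = 0.5 / M                             # MEC share spread over M servers

T_local = M * N * (alpha * d * s / f_local)      # Eq. (10)
T_mec = M * N * M * (beta * d * s / f_mec)       # Eq. (12): inner sum over i
T_cloud = M * N * (gamma * d * s / f_cloud)      # Eq. (14)
T_exe = max(T_local, T_mec, T_cloud)             # Eq. (15)
print(T_local, T_mec, T_cloud, T_exe)
```

Here the slow local CPUs dominate, illustrating why shifting a larger share to the MEC and cloud layers lowers \(T^{exe}\).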

Blockchain model

Since blockchain nodes with low trust values may experience packet loss during offloading, each end device should select nodes with higher trust values for communication to enhance the safety and stability of the communication. The confidence values of the M blockchain nodes are calculated using the trust value calculation method in the “Blockchain MEC system trust value computation” section and arranged in descending order. The D nodes with the highest trust values are then selected as consensus nodes.

Assuming that the consensus node of the blockchain employs practical byzantine fault tolerance (PBFT) as the consensus mechanism, the implementation steps are as follows [35]:

In the first step, the blockchain node collects the transactions recorded by computational task offloading from the edge layer. Upon receiving these transactions, the master node verifies both the signature and the message authentication code (MAC). Assuming that generating or verifying a signature requires \(\varphi\) CPU cycles and generating or verifying a MAC requires \(\phi\) CPU cycles, the computational cost of the master node is

$$\begin{aligned} {{h}_{1}}=\frac{\vartheta }{\theta }(\varphi +\phi ) \end{aligned}$$
(16)

where \(\vartheta\) represents the maximum capacity of transactions that can be included within a block, and \(\theta\) represents the weight of correct transactions.

In the second step, the master node dispatches a pre-prepare message to all sub-nodes. When the sub-node obtains a new block, it first verifies the signature and MAC of the block. Subsequently, it verifies the signature and MAC of the transaction. The computational cost of the sub-node during the pre-prepare process is

$$\begin{aligned} {{h}_{2}}=(\vartheta +1)(\varphi +\phi ) \end{aligned}$$
(17)

In the third step, each sub-node dispatches a prepare message to the rest of the sub-nodes. Each node has to verify the \(2f\) (where \(f=(D-1)/3\)) signatures and MACs sent by the other sub-nodes, and to generate one signature and \(D-1\) MACs. The computational overhead incurred by the sub-nodes during the prepare phase is

$$\begin{aligned} {{h}_{3}}=\varphi +(D-1)\phi +2f(\varphi +\phi ) \end{aligned}$$
(18)

In the fourth step, each sub-node sends a commit message to the rest of the sub-nodes, and the sub-nodes also need to verify 2f signature and MAC upon obtaining the commit message. The computational overhead of the sub-nodes in the commit phase is

$$\begin{aligned} {{h}_{4}}=\varphi +(D-1)\phi +2f(\varphi +\phi ) \end{aligned}$$
(19)

In the fifth step, after receiving \(2f\) matching commit messages, the new block becomes valid and is broadcast to the blockchain network layer. The computational overhead of the sub-node in this phase is

$$\begin{aligned} {{h}_{5}}=\vartheta (\varphi +\phi ) \end{aligned}$$
(20)

Therefore, the total consensus delay at the blockchain network layer is

$$\begin{aligned} {{T}^{bt}}=\max \limits _{1\le d\le D}\left\{ \frac{{{H}_{bt}}}{f_{d}^{bt}}\right\} \end{aligned}$$
(21)

where \({{H}_{bt}}={{h}_{1}}+{{h}_{2}}+{{h}_{3}}+{{h}_{4}}+{{h}_{5}}\) is the total computational overhead of the blockchain consensus process, and \(f_{d}^{bt}\) is the CPU cycle frequency of the dth consensus node.
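The five-phase cost model above can be sketched numerically; the cycle counts `phi_sig`/`phi_mac`, block capacity, weight, and node frequencies below are assumed placeholders, not values from the paper.

```python
# Sketch of the PBFT cost model in Eqs. (16)-(21) under assumed parameters.
D = 10                       # number of selected consensus nodes
f_fault = (D - 1) // 3       # tolerated faulty nodes, f = (D - 1) / 3
cap = 100                    # transactions per block (vartheta in Eq. 16)
weight = 0.9                 # weight of correct transactions (theta in Eq. 16)
phi_sig, phi_mac = 1e5, 1e3  # cycles per signature / per MAC operation

h1 = cap / weight * (phi_sig + phi_mac)        # Eq. (16): master verification
h2 = (cap + 1) * (phi_sig + phi_mac)           # Eq. (17): pre-prepare
h3 = phi_sig + (D - 1) * phi_mac + 2 * f_fault * (phi_sig + phi_mac)  # Eq. (18)
h4 = phi_sig + (D - 1) * phi_mac + 2 * f_fault * (phi_sig + phi_mac)  # Eq. (19)
h5 = cap * (phi_sig + phi_mac)                 # Eq. (20): valid-block broadcast
H_bt = h1 + h2 + h3 + h4 + h5                  # total consensus overhead

f_nodes = [2e9] * D                            # CPU frequency of each node
T_bt = max(H_bt / f_d for f_d in f_nodes)      # Eq. (21): slowest node dominates
print(T_bt)
```

Note that the prepare and commit phases carry identical costs in this model, and the consensus delay is set by the slowest of the D consensus nodes.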

Blockchain MEC system trust value computation

In this study, a comprehensive estimation method is used to determine the trustworthiness of consensus nodes, combining direct and indirect confidence factors. The direct confidence assessment is carried out using subjective logic, while the indirect confidence is determined by soliciting opinions from third-party sources. The node confidence value is assumed to be a real number between 0 and 1. As in much of the related literature, the trust threshold is set to 0.5: a node is considered credible when its confidence value exceeds 0.5, and not credible otherwise. The confidence value of a consensus node is calculated as described below [30].

Computing of direct confidence value

The computation of the direct confidence value is based on node honesty (NH) and node capacity (NC). A subjective logic framework is employed to address the uncertainty in the task offloading process caused by the inherent volatility and noise in the communication channel between the end device and the consensus node. Assume that the trust value of a terminal device \(UE_{n}^{m}\) toward a communication base station \(B{{S}_{m}}\) is represented by the triad \(\omega _{n\rightarrow m}^{m}=\{b_{n\rightarrow m}^{m},d_{n\rightarrow m}^{m},v_{n\rightarrow m}^{m}\}\), where \(b_{n\rightarrow m}^{m}\), \(d_{n\rightarrow m}^{m}\), and \(v_{n\rightarrow m}^{m}\) denote trust, distrust, and uncertainty, respectively, satisfying

$$\begin{aligned} b_{n\rightarrow m}^{m},d_{n\rightarrow m}^{m},v_{n\rightarrow m}^{m}\in [0,1], \qquad b_{n\rightarrow m}^{m}+d_{n\rightarrow m}^{m}+v_{n\rightarrow m}^{m}=1 \end{aligned}$$
(22)

According to the confidence model in the literature, the node honesty NH can be obtained using the following equation

$$\begin{aligned} NH_{n\rightarrow m}^{m}=b_{n\rightarrow m}^{m}+\mu v_{n\rightarrow m}^{m} \end{aligned}$$
(23)

where \(0\le \mu \le 1\) is a constant representing the magnitude of the impact of confidence uncertainty, and

$$\begin{aligned} b_{n\rightarrow m}^{m}&=(1-v_{n\rightarrow m}^{m})\frac{\alpha _{n\rightarrow m}^{m}}{\alpha _{n\rightarrow m}^{m}+\beta _{n\rightarrow m}^{m}} \nonumber \\ d_{n\rightarrow m}^{m}&=(1-v_{n\rightarrow m}^{m})\frac{\beta _{n\rightarrow m}^{m}}{\alpha _{n\rightarrow m}^{m}+\beta _{n\rightarrow m}^{m}} \nonumber \\ v_{n\rightarrow m}^{m}&=1-s_{n\rightarrow m}^{m} \end{aligned}$$
(24)

where \(\alpha _{n\rightarrow m}^{m}\) and \(\beta _{n\rightarrow m}^{m}\) represent the number of completed and uncompleted communications, respectively, and \(s_{n\rightarrow m}^{m}\) represents the quality of the communication channel, i.e., the chance of successful packet delivery. After each communication round, \(\alpha _{n\rightarrow m}^{m}\) and \(\beta _{n\rightarrow m}^{m}\) are updated according to the packet-loss probability:

$$\begin{aligned} \alpha _{n\rightarrow m}^{m,\,new}&=\alpha _{n\rightarrow m}^{m}+P_{n\rightarrow m}^{m}\times (\alpha _{n\rightarrow m}^{m}+\beta _{n\rightarrow m}^{m}) \nonumber \\ \beta _{n\rightarrow m}^{m,\,new}&=\beta _{n\rightarrow m}^{m}-P_{n\rightarrow m}^{m}\times (\alpha _{n\rightarrow m}^{m}+\beta _{n\rightarrow m}^{m}) \end{aligned}$$
(25)

where \(P_{n\rightarrow m}^{m}\) denotes the packet-loss probability, which can be determined using the following equation:

$$\begin{aligned} P_{n\rightarrow m}^{m}=1-\frac{\sum \nolimits _{b}^{c}{{{\omega }_{m}}}(b)\times {{\omega }_{m}}(b)}{\sum \nolimits _{b}^{c}{{{\omega }_{m}}}(b)} \end{aligned}$$
(26)

where \({{\omega }_{m}}(b)\) denotes the weighting factor assigned to the historical link states of base station \(B{{S}_{m}}\), such that \(link={{\omega }_{m}}(1),{{\omega }_{m}}(2),\cdots ,{{\omega }_{m}}(b)\) represents the historical link-state sequence. The weight value is derived from \({{\omega }_{m}}(b)=2{{b}_{m}}/{{c}_{m}}({{c}_{m}}+1)\), where \({{b}_{m}}\) is the sequence number of \({{\omega }_{m}}(b)\) in the link and \({{c}_{m}}\) is the link-state sequence number.

Additionally, this study assumes that all base stations share an identical initial energy loss rate and energy criterion, and that malicious nodes usually exhibit abnormal energy loss when they attack. Therefore, this study uses energy as a QoS-based confidence measure to determine whether a communication base station is malicious. Assuming that \(Q_{n\rightarrow m}^{m}\in [0,1]\) represents the energy loss rate obtained via the ray projection method, the node capacity (NC) is given by the following equation:

$$\begin{aligned} NC_{n\rightarrow m}^{m}=\left\{ \begin{array}{ll} 1-Q_{n\rightarrow m}^{m}, &{} \text {if } E_{n\rightarrow m}^{m} \ge \psi \\ 0, &{} \text {otherwise} \end{array}\right. \end{aligned}$$
(27)

where \(E_{n\rightarrow m}^{m}\) and \(\psi\) denote the remaining energy and energy threshold of a blockchain node, respectively.

To summarize, node confidence depends on NH and NC, and the direct confidence value of a blockchain node can be denoted as

$$\begin{aligned} V_{n\rightarrow m}^{\text {direct}}=\left\{ \begin{array}{ll} 0.5 + (NH_{n\rightarrow m}^{m}-0.5) \times NC_{n\rightarrow m}^{m}, &{} \text {if } NH_{n\rightarrow m}^{m} \ge 0.5 \\ NH_{n\rightarrow m}^{m} \times NC_{n\rightarrow m}^{m}, &{} \text {otherwise} \end{array}\right. \end{aligned}$$
(28)
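A minimal sketch of Eqs. (27) and (28), assuming scalar inputs as defined above (\(Q\) the energy-loss rate, \(E\) the remaining energy, \(\psi\) the energy threshold):

```python
def node_capacity(Q, E, psi):
    """Eq. (27): NC = 1 - Q while the remaining energy E meets the threshold
    psi; a node below the threshold is assigned zero capacity."""
    return 1.0 - Q if E >= psi else 0.0

def direct_trust(NH, NC):
    """Eq. (28): the capacity NC scales the honesty NH toward the neutral
    value 0.5 for honest nodes (NH >= 0.5), and toward 0 otherwise."""
    if NH >= 0.5:
        return 0.5 + (NH - 0.5) * NC
    return NH * NC
```

Note the design choice in Eq. (28): a low-capacity node is pulled toward the neutral value 0.5 when honest, but toward 0 when dishonest, so low energy never inflates trust.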

Computation of indirect confidence value

Recommendations from third-party blockchain network layers also need to be considered to derive trust values. In this study, we assume that the reserve blockchain nodes agree to dedicate their resources to assist the end devices in offloading and computing tasks. When an end device wants to offload via blockchain nodes, the reserve nodes around it apply to the blockchain network layer for the opportunity to assist in the offloading task. Once the application is received, the blockchain network layer selects an appropriate reserve node by evaluating the recommendation values associated with each candidate reserve node. In this study, we assume that the blockchain updates and saves the recommendations of the reserve nodes in a timely manner. However, not every updated recommendation is trustworthy. If selection were based only on the recommendations most recently updated by the reserve nodes, untrustworthy reserve nodes might be selected, leading to unreliable trust estimates. Therefore, it is necessary to verify whether a recommendation is trustworthy.

In this study, we present a simple approach that relies on the recommendation reliability \(R_{n\rightarrow m}^{rel}\) to evaluate the recommendation values. The method calculates the average value \(R_{m}^{ave}\) of all updated recommendations for the reserve node m and determines the disparity between this average and each specific recommendation: the greater the disparity, the less reliable the recommendation. The reliability \(R_{n\rightarrow m}^{rel}\) of a recommendation can be expressed as:

$$\begin{aligned} R_{n\rightarrow m}^{rel}=1-\left| R_{n\rightarrow m}^{rec,i}-R_{m}^{ave} \right| \end{aligned}$$
(29)

where \(R_{n\rightarrow m}^{rec,i}\) is the recommendation value of the ith update within the blockchain network layer.

If the recommender’s credibility falls below 0.5, the recommendation cannot be judged trustworthy from the recommendation value alone; the final recommendation trust value should therefore account for both the credibility of the recommendation and the recommendation value:

$$\begin{aligned} R_{n\rightarrow m}^{recom}=\frac{\sum \nolimits _{i=1}^{I}{R_{n\rightarrow m}^{rel}\times R_{n\rightarrow m}^{rec,i}}}{I} \end{aligned}$$
(30)

where I represents the number of times the recommendation value is updated. Subsequently, the confidence value of the blockchain node can be derived as:

$$\begin{aligned} V_{n\rightarrow m}^{\text {trust}} = \left\{ \begin{array}{ll} V_{n\rightarrow m}^{\text {direct}}, &{} \text {if } \alpha _{n\rightarrow m}^{\text {new}} \ge \text {T}h_{\text {num}} \\ \omega _{\text {direct}}V_{n\rightarrow m}^{\text {direct}} + \omega _{\text {recom}}R_{n\rightarrow m}^{\text {recom}}, &{} \text {otherwise} \end{array}\right. \end{aligned}$$
(31)

where \({{\omega }_{direct}}\in [0,1]\) and \({{\omega }_{recom}}\in [0,1]\) denote the weights of the direct and recommended values, with \({{\omega }_{direct}}+{{\omega }_{recom}}=1\); \(\alpha _{n\rightarrow m}^{new}\) is the count of direct interactions from the recommender to the blockchain network layer, and \({\text {Th}}_{num}\) is the interaction-count threshold above which the direct value alone is used.
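Equations (29)–(31) can be sketched as follows; the weight split \(\omega _{direct}=0.7\), \(\omega _{recom}=0.3\) is an illustrative assumption, not a value from this study:

```python
def reliability(rec_i, rec_avg):
    """Eq. (29): a recommendation far from the mean of all updates is
    considered less reliable."""
    return 1.0 - abs(rec_i - rec_avg)

def recommendation_trust(recs):
    """Eq. (30): reliability-weighted average over the I updated
    recommendation values."""
    avg = sum(recs) / len(recs)
    return sum(reliability(r, avg) * r for r in recs) / len(recs)

def node_trust(direct, recs, n_interactions, th_num,
               w_direct=0.7, w_recom=0.3):
    """Eq. (31): with enough direct interactions, use the direct value alone;
    otherwise blend it with the recommendation trust (weights sum to 1)."""
    if n_interactions >= th_num:
        return direct
    return w_direct * direct + w_recom * recommendation_trust(recs)
```

For example, a node with direct trust 0.8, two consistent recommendations of 0.6, and too few direct interactions would receive the blended value 0.74 under the assumed weights.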

Problem modeling

In this study, we formulate an optimization problem whose objective is to minimize the latency by considering the total task offloading latency, the total computational task processing latency, and the blockchain network layer computational latency. The problem function can be represented as

$$\begin{aligned} F&={T^{trans}}+{T^{exe}}+{T^{bt}} \nonumber \\ &=\sum _{m=1}^{M}\sum _{n=1}^{N}\frac{(1-\alpha _{n}^{m})d_{n}^{m}}{{{R}_{n,m,{f_{n}}}}}+\sum _{m=1}^{M}\sum _{n=1}^{N}\frac{\gamma _{n}^{m}d_{n}^{m}}{{{R}_{m,c}}} \nonumber \\ &\quad +\max \left\{ \sum _{m=1}^{M}\sum _{n=1}^{N}\frac{\alpha _{n}^{m}d_{n}^{m}s_{n}^{m}}{f_{n,m}^{local}},\sum _{m=1}^{M}\sum _{n=1}^{N}\sum _{i=1}^{M}\frac{\beta _{n}^{m,i}d_{n}^{m}s_{n}^{m}}{f_{n,m}^{MEC}},\sum _{m=1}^{M}\sum _{n=1}^{N}\frac{\gamma _{n}^{m}d_{n}^{m}s_{n}^{m}}{f_{n,m}^{cloud}}\right\} \nonumber \\ &\quad +\max \left\{ \frac{h_{1}+h_{2}+h_{3}+h_{4}+h_{5}}{f_{d}^{bt}}\right\} \end{aligned}$$
(32)

In turn, the proposed minimization delay function is obtained as

$$\begin{aligned} &\text {Minimize}\quad F \nonumber \\ &\text {s.t.}\quad \text {C1: } \alpha _{n}^{m}+\beta _{n}^{m,1}+\cdots +\beta _{n}^{m,i}+\cdots +\beta _{n}^{m,M}+\gamma _{n}^{m}=1 \nonumber \\ &\qquad \ \text {C2: } \alpha _{n}^{m}\in [0,1],\ \beta _{n}^{m,i}\in [0,1],\ \gamma _{n}^{m}\in [0,1] \nonumber \\ &\qquad \ \text {C3: } {T^{trans}}+{T^{exe}}+{T^{bt}}\le \tau _{n}^{m} \end{aligned}$$
(33)

where C1 represents that the proportions of subtasks for local terminal computation, offloading to the MEC server, and cloud server computation sum up to 1; C2 ensures that the proportions of subtasks for local computation, offloading to the MEC server, and cloud server computation are within the range of 0 to 1; and C3 ensures that the total latency for local, MEC server, and cloud server computation, offloading, and consensus must be less than the task completion cut-off time.
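To make the objective concrete, the following sketch evaluates F of Eq. (32) for a single task under constraints C1 and C2. Aggregating all MEC servers into one rate/frequency pair and the parameter names (d bits, s cycles per bit) are simplifying assumptions for illustration, not the paper's implementation:

```python
def total_delay(alpha, betas, gamma, d, s, R_up, R_cloud,
                f_local, f_mec, f_cloud, h, f_bt):
    """Delay objective in the spirit of Eq. (32) for one task of d bits with
    computational density s (cycles/bit). alpha, betas, gamma are the local,
    per-MEC and cloud fractions; h lists the consensus workloads h1..h5 and
    f_bt the consensus node frequency."""
    assert abs(alpha + sum(betas) + gamma - 1.0) < 1e-9          # C1
    assert all(0.0 <= x <= 1.0 for x in [alpha, gamma] + betas)  # C2
    t_trans = (1 - alpha) * d / R_up + gamma * d / R_cloud       # offloading
    t_exe = max(alpha * d * s / f_local,                         # tiers run in
                sum(betas) * d * s / f_mec,                      # parallel: the
                gamma * d * s / f_cloud)                         # slowest wins
    t_bt = sum(h) / f_bt                                         # consensus
    return t_trans + t_exe + t_bt
```

The `max` over the three execution terms reflects that local, MEC, and cloud portions are processed in parallel, so the slowest tier determines the execution delay.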

Solving based on WF-CSA optimization algorithm

The standalone artificial fish swarm algorithm exhibits randomness during the solution process, resulting in slow convergence and susceptibility to local optima. To address this issue, this study introduces the wolf pack algorithm into the artificial fish swarm algorithm to enhance its global search capability. The algorithm achieves faster convergence and better solutions through collaboration and information exchange among the wolves. In the wolf pack algorithm, problem-solving is modeled on the hunting skills of wolves: each artificial wolf represents a feasible solution, and the prey odor concentration corresponds to the fitness value of the objective function. The pack comprises three roles: head wolf, scout wolf, and fierce wolf. Similarly, the artificial fish are divided into head fish, scout fish, and fierce fish.

The fish swarm algorithm incorporates four behaviors: foraging, aggregation, tail chasing, and random movement [36], while the wolf pack algorithm includes three behaviors: wandering, summoning, and siege [37]. Despite their differences, these behaviors share similarities. In this study, the head fish is assigned the aggregation and tail-chasing behaviors, the scout fish the foraging, wandering, and random behaviors, and the fierce fish the summoning and siege behaviors. This assignment of behaviors aims to help the algorithm explore the solution space more efficiently.

Headfish behavior

Aggregation behavior

The headfish \({{x}_{i}}\) searches for neighboring fish \({{x}_{j}}\) (\({{d}_{ij}}<Visual\)) within its field of view, where the total number of neighboring fish is \({{n}_{f}}\). If \({{n}_{f}}>0\), it calculates the center position \({{x}_{c}}\) of the neighbors and the corresponding fitness value \({{y}_{c}}\). If \({{{y}_{c}}}/{{{n}_{f}}}\;>\delta {{y}_{i}}\) (where \(\delta\) is the crowding factor), the headfish \({{x}_{i}}\) swims one step toward the center position of the neighboring fish

$$\begin{aligned} {{x}_{c}}=\sum \limits _{j=1}^{{{n}_{f}}}{{{{x}_{j}}}/{{{n}_{f}}}\;} \end{aligned}$$
(34)
$$\begin{aligned} {{x}_{next}}={{x}_{i}}+\frac{{{x}_{c}}-{{x}_{i}}}{\left\| {{x}_{c}}-{{x}_{i}} \right\| }\cdot Step\cdot Rand() \end{aligned}$$
(35)

If the step size is a static constant, a larger Step promotes convergence of the fish swarm, but too large a Step slows the iteration. Instead of a static constant, this study uses a dynamically updated moving step for the artificial fish to accelerate convergence while improving the accuracy and stability of the algorithm. To prevent the step size from shrinking to 0, which would leave the algorithm unable to find a better solution, the minimum moving step is set to \(\tau\). Simultaneously, a fitness-change counter V is introduced with an initial value of 0; V increases by 1 each time an artificial fish updates its best fitness.

When the algorithm starts running, the moving step Step is set to its default size, and a threshold \({{N}_{2}}\) for dynamic step-size switching is defined. Once the counter reaches this threshold, for the subsequent \({{T}_{1}}\cdot {{N}_{2}}\) (\({{T}_{1}}>1\)) iterations the fish use the Step given by the following equation

$$\begin{aligned} Step=\left\{ \begin{array}{ll} Step\div \rho , &{} {{N}_{2}}\le V\le {{N}_{2}}\cdot {{T}_{2}} \\ Step\times \rho , &{} V>{{N}_{2}}\cdot {{T}_{2}} \end{array}\right. \end{aligned}$$
(36)

where \({{T}_{2}}\) is a constant with \(1<{{T}_{2}}<{{T}_{1}}\), and \(\rho \in (0,1)\) is the step factor. When the updated Step falls below the minimum value \(\tau\), Step is set to \(\tau\); once the iteration count reaches \({{T}_{1}}\cdot {{N}_{2}}\), V is reset to zero.
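The dynamic step rule of Eq. (36) can be sketched as follows; rho = 0.6 and the floor tau are assumed illustrative values:

```python
def update_step(step, V, N2, T2, rho=0.6, tau=1e-3):
    """Eq. (36): with the fitness-update counter V in [N2, N2*T2] the step is
    enlarged (divided by rho in (0,1)); beyond N2*T2 it is contracted
    (multiplied by rho); below N2 it is left unchanged. The step never falls
    below the floor tau."""
    if N2 <= V <= N2 * T2:
        step = step / rho
    elif V > N2 * T2:
        step = step * rho
    return max(step, tau)
```

Dividing by \(\rho \in (0,1)\) enlarges the step when improvements are moderate, widening exploration, while frequent improvements shrink the step for a finer local search.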

Tail chasing behavior

The headfish \({{x}_{i}}\) searches for neighboring fish \({{x}_{j}}\) (\({{d}_{ij}}<Visual\)) within its field of view, where the total number of neighboring fish is \({{n}_{f}}\). If \({{n}_{f}}>0\), it searches for the nearby artificial fish \({{x}_{\max }}\) with the largest fitness value \({{y}_{\max }}\). If \({{{y}_{\max }}}/{{{n}_{f}}}\;>\delta {{y}_{i}}\), the headfish \({{x}_{i}}\) swims one step toward the artificial fish \({{x}_{\max }}\).

$$\begin{aligned} {{x}_{next}}={{x}_{i}}+\frac{{{x}_{\max }}-{{x}_{i}}}{\left\| {{x}_{\max }}-{{x}_{i}} \right\| }\cdot Step\cdot Rand() \end{aligned}$$
(37)
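The move rules of Eqs. (35), (37), and (38) all advance a fish by a random fraction of Step along the unit vector toward a target (the swarm center, the best neighbor, or a probed location). A minimal sketch follows; the zero-distance guard is an added assumption for robustness:

```python
import math
import random

def move_toward(x_i, target, step):
    """Shared move rule of Eqs. (35), (37) and (38): advance from x_i by a
    random fraction of `step` along the unit vector toward `target`."""
    dist = math.sqrt(sum((t - x) ** 2 for x, t in zip(x_i, target)))
    if dist == 0.0:
        return list(x_i)
    r = random.random()
    return [x + (t - x) / dist * step * r for x, t in zip(x_i, target)]
```

Because the displacement is normalized before scaling, the fish never overshoots by more than one Step regardless of how far the target is.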

Scout fish behavior

Foraging behavior

The scout fish use their field of view to sense the concentration of food and thus determine the direction of approach. Assuming that the location and fitness value of the ith scout fish are \({{x}_{i}}\) and \({{y}_{i}}\) respectively, and a different location in the scout fish’s field of view and its fitness value are \({{x}_{j}}\) and \({{y}_{j}}\), respectively: if \({{y}_{j}}>{{y}_{i}}\), the fish swims one step toward the new location; otherwise, it selects another location and re-checks the condition; if the condition is still not satisfied, it moves one step randomly. The scout fish \({{x}_{i}}\) randomly determines a location \({{x}_{j}}\) in its field of view, as given in the following equations:

$$\begin{aligned} {{x}_{next}}={{x}_{i}}+\frac{{{x}_{j}}-{{x}_{i}}}{\left\| {{x}_{j}}-{{x}_{i}} \right\| }\cdot Step\cdot Rand() \end{aligned}$$
(38)
$$\begin{aligned} {{x}_{next}}={{x}_{i}}+Visual\cdot Rand() \end{aligned}$$
(39)

Wandering behavior

\({{x}_{i}}\) is the location of the scout fish i in dimension d. The scout fish advances in h directions with a wandering step \(ste{{p}_{a}}\) and records the food concentration after advancing in each direction. Subsequently, it returns to its original position. Therefore, the position of the scout fish after moving one step in the pth \((p=1,2,\cdots ,h)\) orientation is

$$\begin{aligned} {{x}_{next}}={{x}_{i}}+\sin \left(2\pi \frac{p}{h}\right)\times ste{{p}_{a}} \end{aligned}$$
(40)
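The wandering probe of Eq. (40), in its scalar form, evaluates h candidate positions around \({{x}_{i}}\) before the fish returns to its starting point; a minimal sketch:

```python
import math

def wander_positions(x_i, h, step_a):
    """Eq. (40), scalar form: probe h directions from x_i with stride step_a,
    spreading the candidates as sin(2*pi*p/h) for p = 1..h; the fish scores
    each probe and then returns to its original position."""
    return [x_i + math.sin(2 * math.pi * p / h) * step_a
            for p in range(1, h + 1)]
```

The caller would score each probed position and keep only the best direction, which matches the behavior described above.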

Random behavior

The scout fish typically swim irregularly through the water, aiming to expand their range of motion so as to search for food and find companions more efficiently.

$$\begin{aligned} {{x}_{next}}={{x}_{i}}+Visual\cdot Rand() \end{aligned}$$
(41)

Fierce fish behavior

Summoning behavior

As the artificial fish with the largest fitness value, the headfish calls out to the surrounding fierce fish, which take a raiding step \(ste{{p}_{b}}\) to swim rapidly toward its position. If a fierce fish finds a food concentration greater than that perceived by the headfish, the headfish is updated and the fierce fish are re-summoned; otherwise, the fierce fish continue to close in, i.e.

$$\begin{aligned} {{x}_{next}}={{x}_{i}}+ste{{p}_{b}}\times \frac{g-{{x}_{i}}}{\left| g-{{x}_{i}} \right| } \end{aligned}$$
(42)

where \({{x}_{i}}\) represents the present location of the ith fierce fish, and g represents the present location of the headfish.

Siege behavior

The fierce fish closer to the headfish hunt for food in concert with the scout fish. If an artificial fish perceives a food concentration greater than that at its previous location, it hunts with the hunting step \(ste{{p}_{c}}\). Assuming that the location of the food is G and \(\lambda\) is a random number between [-1,1], the siege behavior of the fish is denoted as

$$\begin{aligned} {{x}_{next}}={{x}_{i}}+\lambda \times ste{{p}_{c}}\times \left| G-{{x}_{i}} \right| \end{aligned}$$
(43)

The relationship between the wandering step length \(ste{{p}_{a}}\), raiding step length \(ste{{p}_{b}}\), and hunting step length \(ste{{p}_{c}}\) is shown in the following equation:

$$\begin{aligned} ste{{p}_{a}}=\frac{ste{{p}_{b}}}{2}=2ste{{p}_{c}}=\frac{\left| {{\max }_{d}}-{{\min }_{d}} \right| }{S} \end{aligned}$$
(44)

where S is the step factor, and \(\left| {{\max }_{d}}-{{\min }_{d}} \right|\) is the range of values of the variable to be solved in the dth dimension.
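The coupled step lengths of Eq. (44) can be computed from the search range and the step factor S; a minimal sketch:

```python
def step_lengths(max_d, min_d, S):
    """Eq. (44): all three steps derive from one base stride
    |max_d - min_d| / S, with step_a (wander) = base,
    step_b (raid) = 2 * base and step_c (hunt) = base / 2."""
    base = abs(max_d - min_d) / S
    return base, 2.0 * base, base / 2.0
```

Locking the three strides to one base value keeps raiding coarse (fast approach) and hunting fine (precise capture) without tuning each step separately.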

Algorithm parameter improvement and process

Survival mechanism for the strong fish

Since stronger artificial fish are allocated food preferentially, they are more likely to survive than weaker ones. Therefore, in the optimization algorithm, the R artificial fish with the lowest fitness values are eliminated in each round, and R new artificial fish are generated to replace them.

Dynamic field of view range Visual

In this study, it is presumed that the artificial fish’s field of view is continuous, enabling the expansion of the optimization search interval in the early stage of the algorithm when the field of view is large. However, due to the increase of local optimal points in the later stages of the algorithm, too large a value of Visual may pose an obstacle to the convergence of the algorithm. Therefore, the value of Visual should be reduced appropriately. This adjustment aims to find the optimal fitness value through fewer iterations. The expression for Visual is as follows:

$$\begin{aligned} Visual=\left\{ \begin{array}{ll} \varepsilon \times Visual, &{} {{N}_{j}}\ge {{N}_{1}} \\ Visual, &{} 1<{{N}_{j}}<{{N}_{1}} \end{array}\right. \end{aligned}$$
(45)

where \({{N}_{j}}\) represents the present number of iterations of the algorithm, \({{N}_{1}}\) is the threshold for dynamic segmentation of the field of view range, and \(\varepsilon\) denotes the attenuation coefficient, with \(\varepsilon \in (0,1)\).
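The field-of-view decay of Eq. (45) can be sketched as follows; \(\varepsilon =0.9\) is an assumed illustrative value:

```python
def update_visual(visual, N_j, N1, eps=0.9):
    """Eq. (45): keep the full field of view while N_j < N1; once the
    iteration count reaches the threshold N1, shrink Visual by the decay
    factor eps in (0,1)."""
    return eps * visual if N_j >= N1 else visual
```

Applied once per iteration past the threshold, the view shrinks geometrically, narrowing the search around promising regions in the late phase.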

Crowding factor

The crowding factor \(\delta\) represents the degree of crowding that the fish population can accommodate: a large \(\delta\) enlarges the search range of the artificial fish, while a small \(\delta\) favours localized search. The range of values of \(\delta\) therefore significantly influences the algorithm’s convergence. In this study, a nonlinear dynamic adjustment strategy of inertia weights is employed so that the crowding factor, in its dynamic transition from large to small, improves the global optimization-seeking ability of the fish population. The crowding factor is expressed as follows:

$$\begin{aligned} \delta =2z\times r-z \end{aligned}$$
(46)

where \(z=2-2\left( \frac{2{{N}_{j}}}{{{N}_{\max }}}-{{\left( \frac{{{N}_{j}}}{{{N}_{\max }}} \right) }^{2}} \right)\) denotes the control parameter, and \({{N}_{j}}\) and \({{N}_{\max }}\) represent the present iteration count and the maximum allowable iterations, respectively. As the number of iterations increases, z gradually decreases from 2 to 0, and r denotes a random number between [0,1]. Algorithm 1 summarizes the wolf fish collaborative search algorithm.
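Equation (46) draws the crowding factor uniformly from \([-z,z]\), with z decaying nonlinearly over the run; a minimal sketch (the injectable rng parameter is an assumption added for testability):

```python
import random

def crowding_factor(N_j, N_max, rng=random.random):
    """Eq. (46): z = 2 - 2*(2*N_j/N_max - (N_j/N_max)**2) decays nonlinearly
    from 2 (start of the run) to 0 (final iteration), and
    delta = 2*z*r - z is drawn uniformly from [-z, z]."""
    ratio = N_j / N_max
    z = 2.0 - 2.0 * (2.0 * ratio - ratio ** 2)
    return 2.0 * z * rng() - z
```

Early in the run \(\delta\) can range over \([-2,2]\), encouraging global exploration; by the final iteration the interval collapses to 0, enforcing local refinement.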


Algorithm 1 Wolf Fish Collaborative Search Algorithm(WF-CSA)

Simulation

In this study, we examine a model consisting of a cloud server layer, a blockchain network layer, an edge layer, and a user terminal layer. The cloud server is located at the center of a 100 m \(\times\) 100 m area, and five base stations are evenly distributed in the same area, each paired with a corresponding MEC server. The system contains a total of 50 user terminals randomly distributed within a 10 m radius of each base station. In this paper, we set the simulation parameters based on the blockchain consensus research in [29] and the cloud-edge collaborative partial offloading research in [38]. The validity of the WF-CSA algorithm has been confirmed through simulation, employing the parameters delineated in Table 3.

Table 3 Simulation parameters
Fig. 2

The Correlation between Iteration Count and Optimal Fitness Value

Figure 2 illustrates the trend of fitness values with increasing number of iterations under different algorithms. The horizontal coordinate indicates the number of iterations and the vertical coordinate indicates the best fitness value. The blue line denotes WF-CSA, the red line denotes WPA (Wolf Pack Algorithm), the yellow line denotes AFSA (Artificial Fish Swarm Algorithm) and the purple line denotes BPSO [39] (Binary Particle Swarm Optimisation). The graphs show a clear trend: the fitness values of all four algorithms gradually decrease as the number of iterations increases, and WF-CSA converges to lower fitness values in fewer iterations. This is because in the WF-CSA algorithm, the position of the headfish first undergoes aggregation and tail-chasing behaviours to obtain a more optimal position. The optimised headfish position can coordinate the behaviour of the whole school more effectively, guiding the scout fish and fierce fish to explore in a more favourable direction in a more organised and targeted way. Meanwhile, compared to the WPA, AFSA and BPSO algorithms, WF-CSA introduces more behavioural strategies, allowing it to explore the solution space more comprehensively rather than being limited to specific search directions.

Fig. 3

Total time delay vs. number of users

Figure 3 shows the trend of total delay with increasing number of users for different algorithms. The horizontal axis represents the number of users and the vertical axis represents the total delay. It is clear from the figure that the total delay under all algorithms tends to increase as the number of users increases. This is because more users generate more computational tasks, and processing these tasks increases the computation delay and hence the total delay. Therefore, in a practical deployment, the impact of a growing user population can be offset by simultaneously adding base stations and MEC servers so as to keep the total delay of the system unchanged. In addition, increasing the computational power of the MEC servers reduces the load on the whole system, improving its scalability with respect to the number of user terminals. By comparing the curves of the different algorithms, it is found that the total delay obtained by the WF-CSA algorithm can be reduced by as much as 53.48%, 36.1% and 25.66% compared to the WPA, AFSA and BPSO algorithms, respectively.

Fig. 4

Total time delay vs. size of task

Figure 4 shows the trend of total delay with increasing task size under different algorithms. The horizontal coordinate denotes the task size, and the vertical coordinate denotes the total delay. As observed from the figure, the total delay under all algorithms tends to increase as the task size increases. This is because a larger task size entails a higher volume of data processing and computation, and the communication overhead between nodes also grows, resulting in a larger total delay. When comparing the curves of the different algorithms, WF-CSA reduces the total delay by as much as 42.58%, 28.58% and 15.93% compared to AFSA, WPA and BPSO, respectively.

Fig. 5

Total time delay vs. size of CPU cycle frequency of MEC server

Figure 5 demonstrates the trend of total delay with increasing CPU cycle frequency of the MEC server under different algorithms. The horizontal coordinate represents the CPU cycle frequency of the MEC server, and the vertical coordinate represents the total delay. From the figure, it is evident that the total delay under all algorithms decreases as the CPU cycle frequency of the MEC server increases. This phenomenon is attributed to the fact that a higher CPU cycle frequency executes more computational instructions per second, improving task processing speed. Meanwhile, a higher CPU frequency may enhance task scheduling, making it more flexible and efficient, further reducing the total latency. Compared with AFSA, WPA and BPSO, the total delay obtained by WF-CSA can be reduced by up to 56.67%, 42.37% and 22.42%, respectively.

Fig. 6

Communication success rate vs. count of malicious nodes

Figure 6 shows the trend of communication success rate as the number of malicious nodes increases in the cloud-edge offloading system with and without blockchain. The horizontal axis represents the number of malicious nodes and the vertical axis represents the communication success rate. The figure shows that the communication success rate decreases significantly as the number of malicious nodes increases, mainly because malicious nodes may perform denial-of-service attacks, causing service interruptions and thus jeopardising the security and reliability of the system. Mitigating these attacks through the blockchain PBFT consensus mechanism effectively improves the communication success rate of the system, so the system designed in this paper scales well with the number of malicious nodes. The integration of blockchain into the cloud-edge-end offloading system increases the communication success rate by up to 14.93% compared to the same system without blockchain.

Conclusion

In this study, we propose a collaborative task offloading model for the 6G cloud-network-edge-end architecture, incorporating multiple edge computing servers, communication base stations, user terminals, and a cloud server. The model enables the partial offloading of terminal device computational tasks to the MEC server and the cloud server for computation, or collaborative computation with other base stations that have available computational resources. This model aims to reduce task processing time, ensure sufficient computing resources, and improve communication security and stability through the blockchain consensus mechanism.

Furthermore, this study proposes the CERMTOB algorithm, which formulates the minimization delay problem by incorporating the total task offloading delay, the total computational task processing latency, and the consensus delay of the blockchain network layer. The WF-CSA algorithm is used to optimize the offloading decisions with the aim of minimizing overall delay. The simulation results show the validity of the WF-CSA algorithm, showcasing reductions in total delay by up to 42.58%, 28.58% and 15.93% compared to the AFSA, WPA and BPSO algorithms across different task sizes. Furthermore, the WF-CSA algorithm demonstrates superior results in scenarios involving changes in the count of users and the computational capacity of the MEC server. Additionally, the incorporation of blockchain in the cloud-edge-end offloading system contributes to an improved communication success rate, achieving improvements of up to 14.93% compared to a system without blockchain.

Availability of data and materials

No datasets were generated or analysed during the current study.

Abbreviations

CERMTOB:

Cloud-Edge Resource Management and Task Offloading in Blockchain Networks

WF-CSA:

Wolf fish collaborative search algorithm

MN:

Mobile node

DRL:

Deep reinforcement learning

IAGT:

Isotonic Action Generation Technique

BC-CED:

Blockchain-enabled cloud-edge device

CRL:

Collective reinforcement learning

QO-SRO:

Quasi-opposite search and rescue optimization

MDP:

Markov Decision Process

RL:

Reinforcement Learning

A3C:

Asynchronous advantage actor-critic

OFDMA:

Orthogonal Frequency Division Multiple Access

CS:

Cloud server

BS:

Base station

PBFT:

Practical Byzantine fault tolerance

MAC:

Message authentication code

NH:

Node honesty

NC:

Node capacity

WPA:

Wolf Pack Algorithm

AFSA:

Artificial Fish Swarm Algorithm

BPSO:

Binary Particle Swarm Optimization

References

  1. Jiang W, Han B, Habibi MA, Schotten HD (2021) The road towards 6G: a comprehensive survey. IEEE Open J Commun Soc 2:334–366. https://doi.org/10.1109/OJCOMS.2021.3057679

  2. Zhang H, Shlezinger N, Guidi F, Dardari D, Eldar YC (2023) 6G Wireless Communications: From Far-Field Beam Steering to Near-Field Beam Focusing. IEEE Commun Mag 61(4):72–77. https://doi.org/10.1109/MCOM.001.2200259

  3. Qi L, Liu Y, Zhang Y, Xu X, Bilal M, Song H (2022) Privacy-Aware Point-of-Interest Category Recommendation in Internet of Things. IEEE Internet Things J 9(21):21398–21408. https://doi.org/10.1109/JIOT.2022.3181136

  4. Bharathiraja N, Shobana M, Vijay Anand M, Lathamanju R, Shanmuganathan C, Arulkumar V (2023) A secure and effective diffused framework for intelligent routing in transportation systems. Int J Comput Appl Technol 71(4):363–370. https://doi.org/10.1504/IJCAT.2023.132405

  5. Nagu B, Arjunan T, Bangare ML, Karuppaiah P, Kaur G, Bhatt MW (2023) Ultra-low latency communication technology for Augmented Reality application in mobile periphery computing. J Behav Robot 14(1):20220112. https://doi.org/10.1515/pjbr-2022-0112

  6. Banerjee A, Sufyanf F, Nayel MS, Sagar S (2018) Centralized Framework for Controlling Heterogeneous Appliances in a Smart Home Environment. International Conference on Information and Computer Technologies(ICICT), pp 78-82. https://doi.org/10.1109/INFOCT.2018.8356844

  7. Sufyan F, Banerjee A (2023) Computation Offloading for Smart Devices in Fog-Cloud Queuing System. IETE J Res 69(3):1509–1521. https://doi.org/10.1080/03772063.2020.1870876

  8. Sufyan F, Banerjee A (2019) Comparative Analysis of Network Libraries for Offloading Efficiency in Mobile Cloud Environment. International Journal of Advanced Computer Science and Applications 10(2): 574-584. https://doi.org/10.14569/IJACSA.2019.0100272

  9. Sufyan F, Banerjee A (2020) Computation Offloading for Distributed Mobile Edge Computing Network: A Multiobjective Approach. IEEE Access 8:149915–149930. https://doi.org/10.1109/ACCESS.2020.3016046

  10. Punia U, Batra T, Jindal U, Bharathiraja N, Tiwari RG, Pradeepa K (2023) An Improved Scheduling Algorithm for Grey Wolf Fitness Task Enrichment with Cloud. 2023 5th International Conference on Smart Systems and Inventive Technology (ICSSIT), pp 806-811. https://doi.org/10.1109/ICSSIT55814.2023.10061152

  11. Prathiba SB, Raja G, Anbalagan S, Dev K, Gurumoorthy S, Sankaran AP (2022) Federated Learning Empowered Computation Offloading and Resource Management in 6G–V2X. IEEE Trans Network Sci Eng 9(5):3234–3243. https://doi.org/10.1109/TNSE.2021.3103124

  12. Lin K, Li Y, Zhang Q, Fortino G (2021) AI-Driven Collaborative Resource Allocation for Task Execution in 6G-Enabled Massive IoT. IEEE Internet Things J 8(7):5264–5273. https://doi.org/10.1109/JIOT.2021.3051031

  13. Qin P, Wang M, Zhao X, Geng S (2023) Content Service Oriented Resource Allocation for Space-Air-Ground Integrated 6G Networks: A Three-Sided Cyclic Matching Approach. IEEE Internet Things J 10(1):828–839. https://doi.org/10.1109/JIOT.2022.3203793

  14. Goudarzi S, Soleymani SA, Wang W, Xiao P (2023) UAV-Enabled Mobile Edge Computing for Resource Allocation Using Cooperative Evolutionary Computation. IEEE Trans Aerosp Electron Syst 59(5):5134–5147. https://doi.org/10.1109/TAES.2023.3251967

  15. Gong Y, Yao H, Wang J, Li M, Guo S (2022) Edge Intelligence-driven Joint Offloading and Resource Allocation for Future 6G Industrial Internet of Things. IEEE Trans Network Sci Eng. https://doi.org/10.1109/TNSE.2022.3141728

  16. Qi L, Xu X, Wu X, Ni Q, Yuan Y, Zhang X (2023) Digital-Twin-Enabled 6G Mobile Network Video Streaming Using Mobile Crowdsourcing. IEEE J Sel Areas Commun 41(10):3161–3174. https://doi.org/10.1109/JSAC.2023.3310077

  17. Ravindhar NV, Sasikumar S, Bharathiraja N (2024) Integration of cloud-based scheme with industrial wireless sensor network for data publishing in privacy of point source. Int J Comput Appl Technol 13(2):124–138. https://doi.org/10.1504/IJCC.2024.137408

  18. Pandithurai O et al (2023) A Secured Industrial Wireless IoT Sensor Network Enabled Quick Transmission of Data with a Prototype Study. J Intell Fuzzy Syst 3445–3460. https://doi.org/10.3233/JIFS-224174

  19. Xu X, Zhang X, Gao H, Xue Y, Qi L, Dou W (2020) BeCome: Blockchain-Enabled Computation Offloading for IoT in Mobile Edge Computing. IEEE Trans Ind Inf 16(6):4187–4195. https://doi.org/10.1109/TII.2019.2936869

  20. Cao B et al (2023) Blockchain Systems, Technologies, and Applications: A Methodology Perspective. IEEE Commun Surv Tutorials 25(1):353–385. https://doi.org/10.1109/COMST.2022.3204702

  21. Chishti MS, Sufyan F, Banerjee A (2021) Decentralized On-Chain Data Access via Smart Contracts in Ethereum Blockchain. IEEE Trans Netw Serv Manage 19(1):174–187. https://doi.org/10.1109/TNSM.2021.3120912

  22. Huo R et al (2022) A Comprehensive Survey on Blockchain in Industrial Internet of Things: Motivations, Research Progresses, and Future Challenges. IEEE Commun Surv Tutorials 24(1):88–122. https://doi.org/10.1109/COMST.2022.3141490

  23. Chen H, Luo X, Shi L, Cao Y, Zhang Y (2023) Security challenges and defense approaches for blockchain-based services from a full-stack architecture perspective. Blockchain: Res Appl 4(3):100135. https://doi.org/10.1016/j.bcra.2023.100135

  24. Xu X, Gu J, Yan H, Liu W, Qi L, Zhou X (2023) Reputation-Aware Supplier Assessment for Blockchain-Enabled Supply Chain in Industry 4.0. IEEE Trans Ind Inf 19(4):5485-5494. https://doi.org/10.1109/TII.2022.3190380

  25. Xiao Y, Zhang N, Lou W, Hou YT (2020) A Survey of Distributed Consensus Protocols for Blockchain Networks. IEEE Commun Surv Tutorials 22(2):1432–1465. https://doi.org/10.1109/COMST.2020.2969706

  26. Xu J, Wang C, Jia X (2023) A survey of blockchain consensus protocols. ACM Comput Surv 55(278):1–35. https://doi.org/10.1145/3579845

  27. Yao S et al (2022) Blockchain-Empowered Collaborative Task Offloading for Cloud-Edge-Device Computing. IEEE J Sel Areas Commun 40(12):3485–3500. https://doi.org/10.1109/JSAC.2022.3213358

  28. Okegbile SD, Cai J, Alfa AS (2022) Performance Analysis of Blockchain-Enabled Data-Sharing Scheme in Cloud-Edge Computing-Based IoT Networks. IEEE Internet Things J 9(21):21520–21536. https://doi.org/10.1109/JIOT.2022.3181556

  29. Li M et al (2022) Cloud-Edge Collaborative Resource Allocation for Blockchain-Enabled Internet of Things: A Collective Reinforcement Learning Approach. IEEE Internet Things J 9(22):23115–23129. https://doi.org/10.1109/JIOT.2022.3185289

  30. Feng J, Yu FR, Pei Q, Chu X, Du J, Zhu L (2020) Cooperative Computation Offloading and Resource Allocation for Blockchain-Enabled Mobile-Edge Computing: A Deep Reinforcement Learning Approach. IEEE Internet Things J 7(7):6214–6228. https://doi.org/10.1109/JIOT.2019.2961707

  31. Jain DK, Tyagi SKS, Neelakandan S, Prakash M, Natrayan L (2022) Metaheuristic Optimization-Based Resource Allocation Technique for Cybertwin-Driven 6G on IoE Environment. IEEE Trans Ind Inf 18(7):4884–4892. https://doi.org/10.1109/TII.2021.3138915

  32. Zhang H, Liu X, Xu Y, Li D, Yuen C, Xue Q (2024) Partial Offloading and Resource Allocation for MEC-Assisted Vehicular Networks. IEEE Trans Veh Technol 73(1):1276–1288. https://doi.org/10.1109/TVT.2023.3306939

  33. Hu H, Wang Q, Hu RQ, Zhu H (2021) Mobility-Aware Offloading and Resource Allocation in a MEC-Enabled IoT Network With Energy Harvesting. IEEE Internet Things J 8(24):17541–17556. https://doi.org/10.1109/JIOT.2021.3081983

  34. Zhao H, Deng S, Zhang C, Du W, He Q, Yin J (2019) A Mobility-Aware Cross-Edge Computation Offloading Framework for Partitionable Applications. 2019 IEEE International Conference on Web Services (ICWS), pp 193-200. https://doi.org/10.1109/ICWS.2019.00041

  35. Qiu C, Yao H, Yu FR, Jiang C, Guo S (2020) A Service-Oriented Permissioned Blockchain for the Internet of Things. IEEE Trans Serv Comput 13(2):203–215. https://doi.org/10.1109/TSC.2019.2948870

  36. Pourpanah F, Wang R, Lim CP et al (2023) A review of artificial fish swarm algorithms: Recent advances and applications. Artif Intell Rev 56(3):1867–1903. https://doi.org/10.1007/s10462-022-10214-4

  37. Xu S, Li L, Zhou Z, Mao Y, Huang J (2022) A task allocation strategy of the UAV swarm based on multi-discrete wolf pack algorithm. Appl Sci 12(3):1331. https://doi.org/10.3390/app12031331

  38. Su Q, Zhang Q, Li W, Zhang X (2024) Primal-Dual-Based Computation Offloading Method for Energy-Aware Cloud-Edge Collaboration. IEEE Trans Mob Comput 23(2):1534–1549. https://doi.org/10.1109/TMC.2023.3237938

  39. Singh S, Kim DH (2023) Joint Optimization of Computation Offloading and Resource Allocation in C-RAN With Mobile Edge Computing Using Evolutionary Algorithms. IEEE Access 11:112693–112705. https://doi.org/10.1109/ACCESS.2023.3322650

Acknowledgements

The authors express their gratitude to Beijing Information Science & Technology University for funding this work through its Researchers Supporting Program (No. 2020KYNH212, No. 2021CGZH302).

Authors' information

Shujie Tian received the B.S. degree in electronic information engineering from Beijing Information Science and Technology University in 2022. He is currently pursuing the M.S. degree in information and communication engineering at Beijing Information Science and Technology University. His research interests include wireless communications, resource allocation, and blockchain.

Yuexia Zhang received her M.S. and Ph.D. degrees in information and communication engineering from Beijing University of Posts and Telecommunications in 2008. She has been a Full Professor at the School of Information and Communication Engineering of Beijing Information Science and Technology University since 2019. Her research interests include wireless cooperative communication technology, ultra-wideband technology, and blockchain technology.

Yanxian Bi is currently a senior engineer at the China Academy of Electronic and Information Technology, CETC Academy of Electronics and Information Technology Group Co., Ltd. He received his Ph.D. degree from Beihang University in 2017. From 2015 to 2017, he studied at the University of Birmingham as a Visiting Scholar. His research interests focus on artificial intelligence.

Taifu Yuan received his bachelor's degree in Electronic Information Engineering from Liaoning University of Science and Technology in 2014 and joined IBM China Research Institute in 2016 as a software development engineer. He joined the Beijing Microchip Blockchain and Edge Computing Research Institute in 2019 as a Senior Software Development Engineer. His research interests focus on blockchain.

Funding

This work was supported in part by a Sub-project of the 2020 National Key Research and Development Plan (No. 2020YFC1511704), in part by the Beijing Science and Technology Project (Grant No. Z211100004421009), and in part by the National Natural Science Foundation of China Youth Program (Grant No. 62301058).

Author information

Contributions

S. Tian wrote the main manuscript text and created all the figures. Y. Zhang designed the primary content of the experiments. Y. Bi provided all the support for the experimental environment. T. Yuan assisted in processing the trajectory data and helped organize the table information. All authors reviewed the manuscript.

Corresponding authors

Correspondence to Yuexia Zhang or Yanxian Bi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Tian, S., Zhang, Y., Bi, Y. et al. Blockchain-based 6G task offloading and cooperative computing resource allocation study. J Cloud Comp 13, 95 (2024). https://doi.org/10.1186/s13677-024-00655-3


Keywords