
A fine-grained task scheduling mechanism for digital economy services based on intelligent edge and cloud computing

Abstract

The digital economy is regarded by many countries as an inevitable choice for promoting economic growth and provides new opportunities and paths for high-quality economic development. For the Chinese market, the strong base behind cloud computing is the unstoppable development trend of the digital economy: the cloud, as infrastructure, forms the base of the pyramid on which the digital economy is built. To relieve the pressure on the servers of the digital economy and develop a reasonable scheduling scheme, this paper proposes a fine-grained task scheduling method for cloud and edge computing based on a hybrid ant colony optimization algorithm. The edge computing task scheduling problem is described, and assumptions are set to simplify the scheduling solution. The multi-objective function is solved using a hybrid ant colony optimization algorithm, which finds optimal solutions to computational problems with the help of graphs; ant colony optimization is easy to apply and effective for scheduling problems. The proposed scheduling model includes an end-device layer and an edge layer. The end-device layer consists of client devices that may generate computation-intensive tasks and are sometimes incapable of completing them. The proposed scheduling policy migrates these tasks to a suitable place where they can be completed while meeting the latency requirements; the CPUs of idle users on the end-device layer are used for other CPU-overloaded terminals. The simulation results, in terms of energy consumption and task scheduling delay, show that task scheduling performs better under this method and that the obtained scheduling scheme is more reasonable.

Introduction

In the era of the digital economy, economic entities have broken through the time and space constraints of past economic development, transcending geographic location to form an efficiently interconnected cloud-linked network. This accelerates the inter-regional flow of factors, optimizes resource allocation efficiency, blurs industrial boundaries, exposes all provinces to the same development opportunities, and forms a collaborative, open, and shared economic model. Owing to differences in technology level, policy orientation, and market foundation, provinces differ in their capacity to develop the digital economy, which manifests itself in obvious regional differences in the level of digital economy development [1,2,3].

In the context of the digital economy, the peer effect, competition effect, spillover effect, and imitation effect are easily formed between regions. In this case, cross-regional factor circulation and information transmission are more convenient, and there is a strong spatial dependence between provinces and regions, which helps to narrow the regional gap in digital economy development, i.e., there may be spatial convergence characteristics of digital economy development. An accurate grasp of the current spatial and temporal distribution and spatial convergence characteristics of China's digital economy development is an important prerequisite for grasping the opportunities of the digital economy and helping to implement the strategy of coordinated regional development [4].

Intelligent edge computing and cloud computing technologies help enterprises reduce costs and improve efficiency, and they have many advantages such as high security. According to a survey report by the China Academy of Information and Communications Technology, 95% of enterprises recognize that intelligent edge computing and cloud computing help save costs, about 12% believe they can save more than half of their costs, and 42.6% believe they have improved IT operational efficiency [5,6,7].

In the construction of smart cities, intelligent edge computing and cloud computing can coordinate massive numbers of terminals, provide on-demand control, and share resources, precisely meeting the needs of the massive electronic terminals and network data involved in smart city construction. As the base platform of the digital economy, the cloud ecosystem carries the industrial Internet, big data, artificial intelligence, the Internet of Things, and other digital economy industries. As capital refueling stations for startups, investment institutions provide continuous replenishment for enterprises in the cloud ecosystem and profoundly influence the development and evolution of the industrial landscape [8].

The concept of cloud computing is not uniformly defined in academia, but the definitions share commonalities. In general, participants are divided into two types: cloud service providers and cloud users [1, 9,10,11]. Cloud service providers are responsible for building and maintaining pools of cloud resources, and cloud users use these resources and pay certain fees. Cloud computing is characterized by large scale, easy scalability, flexible services, and openness and transparency. Cloud computing services can be divided into three categories: infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS), as shown in Fig. 1. With the development of IoT and wireless communication technologies, many emerging intelligent IoT application devices have appeared, requiring IoT end devices to run and process large amounts of computation-intensive data for tasks such as autonomous driving, face recognition, and natural language processing, which have high requirements for latency and energy sensitivity.

Fig. 1. Types of cloud computing services

The computing power and battery life of IoT terminal devices impose certain limits on effectively handling applications that require low latency and low energy consumption. At the same time, traditional cloud computing cannot handle the explosive growth of edge data, so mobile edge computing has become a focus for researchers. Mobile edge computing migrates the tasks to be processed by IoT end devices to the edge of the mobile network and processes them through radio access network (RAN) nodes or base stations (BSs). The RAN has strong storage and computing capabilities and can provide services and computing power to the nearest mobile IoT terminal devices, thus meeting the requirements of low latency, low energy consumption, high reliability, and continuous wide-area coverage [12,13,14].

Terminal devices used by clients generate computation-intensive tasks that cannot be completed with the resources of the local device; a reasonable scheduling policy is needed to migrate these tasks to a suitable place where they can be completed while meeting the task latency requirements. Therefore, this paper combines the basic ant colony algorithm with a genetic algorithm to realize efficient fine-grained task scheduling for edge computing and to provide a reference for the edge computing task scheduling problem. The scheduling model consists of the end-device layer at the bottom and the edge layer at the top. The end-device layer includes cell phones and tablets used by clients; it also contains idle end users with no tasks to process, whose CPUs can at times be used for other CPU-overloaded terminals. Each edge server comes with a base station that receives the tasks sent to it and, when task processing is completed, returns the processing results to the mobile user through the base station, completing the task scheduling.

The motivation for this study is that the existing state-of-the-art methods have the following two fundamental drawbacks:

1. When scheduling fine-grained tasks, a task can be offloaded to an edge node chosen according to its own needs, but how to choose the appropriate edge node becomes a key issue.

2. Previous scheduling schemes take the least energy consumption or the shortest delay as the target and choose the edge node at the shortest distance. However, such schemes do not consider the resource capacity of the edge nodes, which can lead to task congestion and task scheduling failure, while some edge server nodes in the edge network may sit idle, wasting computing resources.

To address these limitations, relieve the pressure on the servers of the digital economy, and develop a reasonable scheduling scheme, this paper proposes a fine-grained task scheduling method for cloud and edge computing based on a hybrid ant colony optimization algorithm.

The rest of the paper is organized as follows. The Related work section reviews the current research on the digital economy and on intelligent edge and cloud computing and analyzes and summarizes it in detail. The Methods section discusses the computing architecture for digital economy development based on intelligent edge computing and cloud computing. The Experiments and results section describes the experimental setup and the experiments that validate the theoretical part and analyzes the results. The Conclusion and future work section presents the conclusions of the study and future directions.

Related work

The digital economy and the use of edge and cloud computing technologies in the economy have been hot topics of research during the past decade. This section summarizes the current research trends in this area.

Research on the digital economy development

After the global economic crisis, the world economy entered a period of deep change and structural adjustment: the old driving forces of economic growth, represented by international trade and investment, weakened significantly, and economic recovery was sluggish. While the traditional economy remained in the doldrums, the internet digital industry, based mainly on new-generation information technologies such as cloud computing, big data, and artificial intelligence, emerged as a new driving force for economic recovery and development and for optimizing the economic structure [15, 16]. As a new economic form succeeding the agricultural and industrial economies, the internet digital industry, with digital knowledge and information as key production factors, not only integrates deeply with the three traditional industries but in a certain sense even surpasses them, setting off a technological change that effectively enhances productivity and transforms production methods, profoundly affecting the world economic landscape.

As a new economic form attracting global attention, the internet digital industry is regarded by many countries as an inevitable choice for promoting growth and adjusting structure, and it provides a new path for the high-quality development of China's economy. Although the development of the internet digital industry has received general attention from governments, and scholars and research institutions have begun to study related topics and obtained useful findings, its development path differs from any previous economic form and is disruptive to traditional economic models. Relevant research is therefore still at the exploratory stage and has not yet formed a systematic research framework: it lacks an in-depth discussion of the mechanisms by which the internet digital industry promotes high-quality economic development, and it has not yet produced a broadly agreed basic concept, a clear and rich interpretation of its connotation, or relatively clear measurement ideas, which seriously hinders the continued advancement and deepening of related research [17,18,19,20].

Tracing the evolution process, we can find that the formation of the concept of the internet digital industry is inseparable from the advance of information technology and the digitalization process. As early as the 1960s, the application of the information revolution in the economic field first formed the concept of the information economy. The development of the industrial economy was dominated by traditional industries such as steel, energy, and automobiles, whereas the information economy was driven by new industries such as chips and integrated circuits. In the early 1990s, thanks to the rapid development of technologies such as the Internet, databases, and multimedia, countries began to implement information superhighway programs one after another, significantly saving time and energy, increasing labor productivity, and bringing huge economic benefits. Since then, scholars have pioneered the description of the internet digital industry as an economy that uses bits rather than atoms, revealing the essential difference between it and the traditional economy, as well as its digital and network-based qualities. Driven by new technologies such as cloud computing, the Internet of Things, and artificial intelligence, the internet digital industry has gradually penetrated non-information industries, its reach has continued to expand to social media and search engines, and its development has entered a golden age.

In this context, many governments have formulated strategic plans for infrastructure construction, application model innovation, and integration of the internet digital industry with the traditional economy. Scholars and research institutions have also conducted some research based on the development of the internet digital industry and tried to define the concept of the internet digital industry and explain the connotation of the internet digital industry. The internet digital industry is a new and special economic form that is distinctly different from agricultural and industrial economies and is the sum of a series of digital-based economic activities oriented to the optimization of resource allocation. From this perspective, the internet digital industry is the total economic output brought by all kinds of digital inputs, including digital skills, digital equipment such as hardware, software, and communication, and other digital intermediate products and services used in the production process, which have a positive effect on the improvement of economic efficiency. Like the steam engine and electrical technology revolutions that gave birth to the industrial economy, the internet digital industry is an inevitable product of the information technology revolution and the networking of the economy and society through digital technologies [21].

The above analysis shows that the internet digital industry is a series of economic activities that take the new generation of information and communication technology as the foundation, modern information networks as an important carrier, and digital knowledge and information (including digital technology, digital equipment, and other digital intermediate goods and services used in the production chain) as production factors to realize production and consumption, cooperation and exchange, and social governance.

In fact, with the rapid development of technologies such as the Internet of Things, artificial intelligence, and cloud computing, the internet digital industry includes not only the information industry but also traditional industries and fields undergoing digital transformation, such as logistics, transportation, and industrial control, and it is constantly changing the operating mode of the economy and society through the integration of technology, industry, producers, and consumers. Therefore, the internet digital industry is not only an inevitable product of the deep development and comprehensive application of information and communication technology but also a new economic form following the agricultural and industrial economies, as well as a new driving force for economic growth [22].

Intelligent edge and cloud computing

Many organizations and scholars have discussed cloud computing from different perspectives. Foster, the father of grid computing, sees cloud computing as a computing aggregation that uses distributed computing, driven by economies of scale, to provide users with a dynamically scalable resource management platform and services via the Internet. The National Institute of Standards and Technology considers cloud computing a computing model of resource sharing through the network, where CPU, storage, and application services are resources that exist in a dynamic resource pool and can be rapidly accessed, used, and released with minimal overhead [23]. IBM proposes that cloud computing is a platform or an application whose servers can be either real physical servers or virtual ones; users can access the cloud platform anytime and anywhere, and the platform dynamically configures cloud resources so that resources in the cloud can be stored and used on demand for efficient utilization. The China Institute of Electronic Technology Standardization holds that cloud computing is a computing model that flexibly provisions and manages pools of physical and virtual machine resources, with users obtaining scalable, shared resources over the network according to their own needs.

Infrastructure-as-a-service occupies the bottom layer. As shown in Fig. 2, IaaS provides basic hardware resources such as CPU and storage to users as a service to support computing and storage. Users can develop and deploy operating systems and applications without purchasing hardware such as servers, storage devices, and network devices; they obtain services through the network by paying fees based on the amount and duration of resource usage, which greatly reduces their costs. At present, IaaS mainly provides service support for PaaS and SaaS and offers an environment for companies with hardware needs, which can choose different hardware configurations through IaaS to develop their own applications according to their business needs. Mainstream IaaS platforms include OpenStack, CloudStack, and Amazon Elastic Compute Cloud [24,25,26].

Fig. 2. Structure of IaaS (Infrastructure as a Service)

Platform-as-a-service (PaaS) sits in the second layer, and its service is like the operating system on the hardware. It provides platform environment support for the development, testing, and operation of user applications and realizes customization of the application environment. The configuration and maintenance of the development environment are handed over to the platform service provider, which greatly improves developer efficiency [27, 28]. Mainstream PaaS platforms include Google App Engine, Microsoft Azure, and Cloud Foundry. Figure 3 shows the PaaS structure of Cloud Foundry.

Fig. 3. Structure of PaaS (Platform as a Service)

With the popularity of IoT, edge computing has developed rapidly, and intelligent algorithms represented by deep learning are increasingly deployed to edge computing devices, constituting an important end-side computing power for the Internet of Everything. However, the network structures of deep learning algorithms are highly variable, their numbers of layers and parameters are unusually large, and their demand for computing resources is increasingly complex and diverse. Edge intelligent computing faces many practical challenges in terms of computing power, power consumption, size, and cost, the most important of which is the conflict between computing power and power consumption. Intelligent computing is not limited to the digital economy; it has long been used to generate different types of intelligent services in fields such as wellbeing [29] and sustainable environments [24]. In recent years, a series of network light-weighting techniques such as network pruning, parameter quantization, and model distillation have emerged in the study of edge intelligent computing applied to practical scenarios, while hardware acceleration techniques are also relied upon to balance computing power and power consumption. Hardware adaptation is a representative implementation means among hardware acceleration techniques, among which the hardware implementation of heterogeneous parallel computing platforms has received wide attention [30].

In order to resolve the contradiction between computing power and power consumption in edge intelligent computing, and to cope with the complex and diversified demand for computing resources when intelligent algorithms represented by deep learning are implemented on the edge side, traditional computing platforms have been optimized in terms of instruction model, communication mechanism, and storage system to improve computing speed, reduce energy loss, and enhance adaptability to deep learning algorithms. Compared with traditional computing platforms, however, heterogeneous parallel computing platforms not only need to continuously improve instruction and data parallelism and code density in the instruction model, increase data transfer and reuse rates in the communication mechanism, and improve access speed while reducing access latency and access power in the storage system, but also present a series of new problems and challenges.

Currently, although processors of different architectures have been integrated into the same chip, the instruction models they use still differ, and the compatibility of execution models has not been fundamentally solved, making task management during heterogeneous parallel computing difficult and development harder. Developing dedicated instruction sets helps solve this problem, but their generality is low and their application scope limited. Designing a unified and common instruction model that realizes the deep integration of on-chip heterogeneous processors, supports heterogeneous-core task management, and offers development convenience to R&D personnel is a major technological challenge for heterogeneous parallel computing platforms. Deep learning algorithms are mostly data-intensive, requiring large access bandwidth and small latency; otherwise they are prone to the storage-wall problem. Edge computing, however, is severely constrained by power consumption, which places higher requirements on the on-chip communication mechanism: the data communication mechanism must be sufficiently lightweight.

The fundamental issues with existing methods are the following. First, how to choose the appropriate edge node for offloading according to one's own needs. Second, existing scheduling schemes do not consider the resource capacity of the edge nodes, which can lead to task congestion and task scheduling failure. Third, computing resources are wasted when some edge server nodes in the edge network sit idle. Other issues include the difficulty of managing tasks during heterogeneous parallel computing, which makes development harder, and the challenge of designing a unified and common instruction model to realize the deep integration of on-chip heterogeneous processors. The current study focuses on solutions to these problems.

Methods

Edge computing task scheduling refers to reasonably assigning tasks that would originally be handled by local servers or the central cloud to server nodes at the edge of the network, relieving the task processing pressure on local servers or the central cloud and reducing task processing latency. Mobile edge computing is a new mobile communication network technology that provides a network environment at the edge of the mobile network to cache many tasks from the network; it can handle more complex network edge tasks and has better task computing capability for analyzing and processing tasks. This section presents the architecture of the fine-grained task scheduling model proposed in this paper.

Overall architecture

The scheduling model is divided into the end-device layer at the bottom and the edge layer at the top. The end-device layer mainly includes cell phones and tablets used by clients. Some of them generate computationally intensive tasks and are incapable of completing them with the resources provided by the local device; a reasonable scheduling policy is needed to migrate the tasks to a suitable place where they can be completed while meeting the task latency requirements. Another part of the end-device layer consists of idle end users that have no tasks to process; at some moments their CPUs are idle and can be used for other CPU-overloaded terminals. The overall structure based on intelligent edge computing and cloud computing used in this paper is shown in Fig. 4. Each edge server comes with a base station whose role is to receive the tasks sent to it and then, when task processing is completed, send the processing results back to the mobile user through the base station.
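To make the two-layer structure concrete, the sketch below models its entities in Python. It is a minimal illustration under our reading of the architecture; all class names, fields, and units are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    task_id: int
    cpu_cycles: float   # computation demand of the task
    data_size: float    # bits to transmit if the task is offloaded
    deadline: float     # latency requirement in seconds

@dataclass
class EndDevice:
    device_id: int
    cpu_capacity: float                           # cycles/s available locally
    pending: list = field(default_factory=list)   # queued Task objects

    def is_idle(self) -> bool:
        # Idle terminals can lend their CPU to CPU-overloaded neighbours.
        return not self.pending

@dataclass
class EdgeServer:
    server_id: int
    cpu_capacity: float   # cycles/s at the edge node
    bandwidth: float      # uplink bandwidth of the attached base station (Hz)
```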

Fig. 4. Architecture of the proposed model

Fine-grained task scheduling algorithm for digital economy services

Granularity refers to how finely a project is divided into modules. The more an edge computing task scheduling module is divided, the smaller each sub-module is and the more detailed the work it is responsible for, i.e., the finer the granularity. The difficulty and energy consumption of scheduling fine-grained tasks are therefore higher than for coarse-grained task scheduling. When scheduling fine-grained tasks, a task can be offloaded to an edge node chosen according to its own needs, but choosing the appropriate edge node becomes a key issue. Previous scheduling schemes take the least energy consumption or the shortest delay as the target and choose the edge node at the shortest distance. However, such schemes do not consider the resource capacity of the edge nodes, which can lead to task congestion and task scheduling failure, while some edge server nodes in the edge network may sit idle, wasting computing resources.

To address the above issues, several assumptions are made before task resource scheduling:

1. Only one edge node can be considered as the scheduling object for a given task.
2. The computing resources of the edge nodes are subject to energy exhaustion.
3. The topology of the edge network is known, as are the distances from each server node in the topology to the local server and the central cloud.
4. The computing resource demand of each task is known and varies from task to task.
5. The virtual machines inside an edge node work in parallel.
6. The task scheduling process is executed continuously, without intermittent execution.
7. The information of the task transmission channel is known.

The task scheduler determines the priority of task scheduling, i.e., it designs the task scheduling order. It sorts the tasks by calculating their priority indices and then arranges the tasks to be scheduled from the largest index to the smallest. The priority index is calculated as follows.

$$\begin{array}{c}{S}_{i}={A}_{i}-\left({A}_{i}-{d}_{i}\right)\cdot \alpha \\ =\frac{{t}_{i}}{{b}_{i}-{c}_{i}}-\left(\frac{{t}_{i}}{{b}_{i}-{c}_{i}}-\frac{{w}_{i}}{W}\right)\cdot \alpha \end{array}$$
(1)

where \({S}_{i}\) represents the priority index of fine-grained task \(i\); \({A}_{i}\) represents the saturation degree of fine-grained task \(i\); \({d}_{i}\) represents the relative weight ratio; \(\alpha\) represents the balance index; \({b}_{i}\) represents the deadline of fine-grained task \(i\); \({c}_{i}\) represents the start moment of fine-grained task \(i\); \({w}_{i}\) represents the value of executing fine-grained task \(i\); \(W\) represents the total value of the fine-grained tasks; and \({t}_{i}\) represents the maximum elapsed time. Based on the calculated task priority indices, a task scheduling sequence is designed. Fine-grained task scheduling aims to address two quantities: the energy required for task scheduling (energy consumption) and the time required for task scheduling (delay). For these two quantities, the objective function in this study is a multi-objective function, described as follows.

$$\begin{array}{c}minY={y}_{1}\cup {y}_{2}\\ {y}_{1}={Q}_{i}(1)-{P}_{i}\left[{Q}_{i}(1)-\sum_{j=1}^{m} {q}_{ij}{Q}_{i}(2)\right]\\ {y}_{2}={T}_{i}(1)-{P}_{i}\left[{T}_{i}(1)-\sum_{j=1}^{m} {q}_{ij}{T}_{i}(2)\right]\end{array}$$
(2)

where \(minY\) represents the minimum of the integrated objective; \({y}_{1}\) represents the fine-grained task scheduling energy consumption; \({y}_{2}\) represents the fine-grained task scheduling delay; \({Q}_{i}(1)\) and \({T}_{i}(1)\) represent the energy consumption and delay when task \(i\) is processed locally; \({P}_{i}\) is the scheduling decision factor: when it equals 0, task \(i\) is processed locally, and when it equals 1, task \(i\) is scheduled; \({q}_{ij}\) is the edge server assignment factor: when it equals 0, task \(i\) is not assigned to edge server \(j\), and when it equals 1, task \(i\) is assigned to edge server \(j\); \({Q}_{i}(2)\) and \({T}_{i}(2)\) represent the energy consumption and delay when task \(i\) is processed on an edge node; and \(m\) represents the number of edge servers.
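As a hedged numeric illustration of Eqs. (1) and (2), the snippet below computes the priority index used for ordering and the per-task energy and delay terms of the objective. Variable names mirror the symbols above; the sample figures are made up.

```python
def priority_index(t_i, b_i, c_i, w_i, W, alpha):
    """Priority index S_i of Eq. (1): the saturation degree A_i pulled
    toward the relative weight d_i by the balance index alpha."""
    A_i = t_i / (b_i - c_i)   # saturation degree
    d_i = w_i / W             # relative weight ratio
    return A_i - (A_i - d_i) * alpha

def objective_terms(Q_local, T_local, Q_edge, T_edge, P_i, q):
    """Energy y1 and delay y2 of Eq. (2) for one task: the local cost
    stands unless P_i = 1, in which case the cost on the assigned edge
    server is used (q is the 0/1 assignment vector q_ij over servers)."""
    y1 = Q_local - P_i * (Q_local - sum(qj * Qj for qj, Qj in zip(q, Q_edge)))
    y2 = T_local - P_i * (T_local - sum(qj * Tj for qj, Tj in zip(q, T_edge)))
    return y1, y2

# Sort tasks from the largest priority index to the smallest.
# Each tuple is (t_i, b_i, c_i, w_i); W is the total task value.
tasks = [(2.0, 10.0, 1.0, 3.0), (1.5, 6.0, 2.0, 5.0)]
W = sum(task[3] for task in tasks)
order = sorted(tasks, key=lambda task: priority_index(*task, W, 0.5),
               reverse=True)
```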

The Ant Colony Optimization (ACO) algorithm performs the merit-seeking procedure for fine-grained edge computing task scheduling. ACO uses artificial ants, agents that search for optimal solutions to a problem; the different solutions generated are compared, and the procedure repeats until an optimal solution is found. The general form of the algorithm is given in Algorithm 1.

Algorithm 1. The ant colony optimization algorithm in general form
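Algorithm 1 is reproduced in the original as a figure. The sketch below is a minimal, generic ACO for a task-to-server assignment, included only to fix ideas: the parameter names and the pheromone-update rule are textbook ACO assumptions, not the paper's exact procedure.

```python
import random

def ant_colony(n_tasks, n_servers, cost, n_ants=20, n_iter=100,
               alpha=1.0, rho=0.1, q0=1.0):
    """Generic ACO: cost(assignment) -> scalar to minimise, where an
    assignment maps each task index to a server index. Pheromone
    tau[i][j] biases task i toward server j."""
    tau = [[1.0] * n_servers for _ in range(n_tasks)]
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            # Each ant builds a complete assignment probabilistically.
            assign = []
            for i in range(n_tasks):
                weights = [tau[i][j] ** alpha for j in range(n_servers)]
                r = random.random() * sum(weights)
                acc, pick = 0.0, n_servers - 1
                for j, w in enumerate(weights):
                    acc += w
                    if r <= acc:
                        pick = j
                        break
                assign.append(pick)
            c = cost(assign)
            if c < best_cost:
                best, best_cost = assign, c
            # Better (cheaper) solutions deposit more pheromone.
            for i, j in enumerate(assign):
                tau[i][j] += q0 / (1.0 + c)
        # Evaporation keeps the search from stagnating.
        tau = [[(1.0 - rho) * t for t in row] for row in tau]
    return best, best_cost

# Example: balance six task loads across three edge servers.
loads = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
def imbalance(assign):
    per_server = [0.0, 0.0, 0.0]
    for i, j in enumerate(assign):
        per_server[j] += loads[i]
    return max(per_server) - min(per_server)

best_assign, best_c = ant_colony(6, 3, imbalance)
```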

In our proposed approach, the ant colony algorithm works as described in Algorithm 2.

Algorithm 2. The proposed hybrid ant colony scheduling procedure
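Algorithm 2 is likewise given as a figure in the original. Under our reading of the text (genetic operators layered on the ant colony search to avoid local optima), one plausible refinement step could look like the sketch below; the function names, the one-point crossover, and the mutation rate are all hypothetical.

```python
import random

def crossover(a, b):
    """One-point crossover of two task-to-server assignments."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(assign, n_servers, p=0.05):
    """Randomly reassign a few tasks to escape local optima."""
    return [random.randrange(n_servers) if random.random() < p else j
            for j in assign]

def genetic_refinement(ant_solutions, cost, n_servers, elite=4):
    """One GA step over the assignments built by the ants: keep the
    cheapest `elite` solutions, breed the rest from them, and return
    the refreshed pool so its quality can feed the pheromone update."""
    pool = sorted(ant_solutions, key=cost)[:elite]
    children = [mutate(crossover(random.choice(pool), random.choice(pool)),
                       n_servers)
                for _ in range(len(ant_solutions) - elite)]
    return pool + children
```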

Experiments and results

The proposed method was simulated, and its performance was evaluated and compared with other methods. This section summarizes the experimental setup and the results.

Experiment setup

To facilitate comparison and analysis of the simulation experiments, the following comparison scheme is set up: the proposed algorithm is compared against the simulated annealing particle swarm algorithm (PSOSA), the discrete binary particle swarm algorithm (BPSO), the simulated annealing algorithm (SA), an all-offload scheme (OFFLOAD), and an all-local scheme (LOCAL).

Scheduling simulation tests were performed on a workstation with an Intel Core i7 CPU @ 2.80 GHz, an NVIDIA GeForce GTX 1050 Ti, and 8 GB of RAM. The parameters and configurations of the performance evaluation experiments are shown in Table 1. The application scenario for this test is a digital economy service center in a region. Edge computing enables the server to complete the entire monitoring process of video stream acquisition, compression, storage, detection, display, and final control at the edge of the network, while avoiding the discontinuity caused by excessive pressure on the core network, which prevents streams from being forwarded in time. Since mobility is an inherent property of edge computing, switching between small cells may cause users to change servers, and different servers differ in attributes and configuration; this paper therefore realizes mobility management through the cooperation of the edge computing system and the attributed location register.

Table 1 Simulation test parameters

According to the core industry classification criteria, the output of the core industries is calculated industry by industry and then summed to obtain the total output of the core digital industries; the value added of the core digital industries is likewise calculated industry by industry and then summed. First, the total output of the digital industry within industry \(i\) is calculated. Assume that the ratio of the digital industry's total output to its operating income in industry \(i\) equals the ratio of the industry's total output to its total operating income. The share of digital industry operating income in total operating income then converts the industry's total output into the digital industry's total output, as shown in the following equation:

$$G{O}_{it}^{d}=G{O}_{it}\cdot \frac{{S}_{it}^{d}}{{S}_{it}}$$
(3)

where \({S}_{it}^{d}\) is the digital industry operating income of industry \(i\) in year \(t\), \({S}_{it}\) is the total operating income of industry \(i\) in year \(t\), \(G{O}_{it}\) is the total output of industry \(i\) in year \(t\), \(G{O}_{it}^{d}\) is the total output of the digital industry within industry \(i\) in year \(t\), and \(\frac{{S}_{it}^{d}}{{S}_{it}}\) is the digital industry share of industry \(i\). Next, the value-added rate of the digital industry in industry \(i\) is calculated. Assume that the digital industry's intermediate consumption rate equals the industry's intermediate consumption rate, i.e., the value-added rate of the digital industry in industry \(i\) equals the value-added rate of industry \(i\):

$$VA{R}_{it}^{d}=VA{R}_{it}$$
(4)

where \(VA{R}_{it}^{d}\) is the value-added rate of the digital industry in industry \(i\) in year \(t\), and \(VA{R}_{it}\) is the value-added rate of industry \(i\) in year \(t\). Next, the value added of the digital industry in industry \(i\) is calculated by multiplying the digital industry's total output in industry \(i\) by its value-added rate:

$$V{A}_{it}^{d}=G{O}_{it}^{d}\cdot VA{R}_{it}^{d}$$
(5)

where \(V{A}_{it}^{d}\) is the value added of the digital industry in industry \(i\) in year \(t\). Finally, the total value added of the core digital industries is calculated by summing the value added of the digital industry across industries for each year:

$$V{A}_{t}^{d}=\sum_{i} V{A}_{it}^{d}$$
(6)

where \(V{A}_{t}^{d}\) is the value added of the core digital industries in year \(t\); a toy numeric sketch of Eqs. (3)–(6) follows the convergence discussion below.

The hybrid ant colony algorithm proposed in this article is used to schedule 1000 fine-grained tasks to test its convergence performance. We use the value of a fitness function to determine how effective the proposed approach is: the fitness function takes a solution as input and outputs how close that solution is to the optimal solution. The faster the fitness value reaches its lowest point, the better the algorithm's performance. Figure 5(a) shows the value of the fitness function quickly reaching 0, which indicates the effectiveness of the algorithm, as shown in Fig. 5(b).
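As noted above, here is a toy sketch of the value-added measurement in Eqs. (3)–(6); the industry names and all figures are invented purely to show the arithmetic.

```python
def digital_core_value_added(industries):
    """Eqs. (3)-(6): total value added of the core digital industries.
    `industries` maps an industry to a dict with (illustrative) keys:
      GO  - total output GO_it
      S   - total operating income S_it
      S_d - digital operating income S^d_it
      VAR - value-added rate VAR_it (equal to VAR^d_it by Eq. (4))
    """
    total = 0.0
    for ind in industries.values():
        GO_d = ind["GO"] * ind["S_d"] / ind["S"]  # Eq. (3)
        VA_d = GO_d * ind["VAR"]                  # Eq. (5)
        total += VA_d                             # Eq. (6)
    return total

# Invented figures: 250*0.25 + 180*0.40 = 62.5 + 72.0 = 134.5
example = {
    "manufacturing": {"GO": 1000.0, "S": 800.0, "S_d": 200.0, "VAR": 0.25},
    "services":      {"GO": 600.0,  "S": 500.0, "S_d": 150.0, "VAR": 0.40},
}
print(digital_core_value_added(example))
```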

Fig. 5. Algorithm convergence speed test

The fitness value of this method reaches its lowest point at 200 iterations, i.e., the method converges at that point. The effectiveness of the method improves further as the number of iterations increases.

Experimental results

Under the same simulation test conditions, experiments were carried out using the improved NSGA-II-based scheduling method, the adaptive genetic algorithm-based scheduling method, the DRL-based scheduling method, and the fireworks model-based scheduling method, and the results were compared with the hybrid ant colony optimization algorithm proposed in this study. The five task scheduling schemes were simulated simultaneously, with 100 tasks scheduled onto 10 edge server nodes, and their scheduling energy consumption and delay were compared. The energy consumption of an algorithm refers to how many CPU cycles it runs and how much power it consumes; in today's data processing systems, organizations are interested in energy-efficient algorithms because data volumes are growing and data processing is becoming more complex. We also use the time delay, or latency (in seconds), of fine-grained task scheduling to assess the performance of the proposed approach.

The results are shown in Figs. 6 and 7. The highest scheduling energy consumption is 150 kWh at a base station bandwidth of 1.4 kHz. The scheduling energy consumption is lower because the hybrid ant colony algorithm selects the edge server node for the next task according to the transfer probability during scheduling, which reduces the influence of interference factors when selecting bandwidth. As can be seen from Fig. 7, the fine-grained task scheduling delay of the proposed method is smaller at the same forward link rate, with a highest delay of 5 s. The main reason is that this paper's method optimizes the ant colony algorithm with a genetic algorithm, which improves scheduling performance and avoids falling into local optima. As Figs. 6 and 7 show, the energy consumption and latency of the hybrid ant colony task scheduling are smaller than those of the other four scheduling methods, which indicates that the proposed scheduling method performs better and produces a more reasonable scheduling solution.

Fig. 6. Energy consumption of the proposed method

Fig. 7. Task scheduling delay of the proposed method

Conclusion and future work

The digital economy plays a great role in the healthy development of a country's economy in the twenty-first century. Modern technologies such as cloud computing, big data, and blockchain are continuously strengthening policy and research and development. By promoting the integration of the digital economy with the real economy, China can significantly grow its economy and compete with the largest economies of the world. To improve task scheduling on intelligent edge infrastructure and cloud computing, this paper proposed a fine-grained task scheduling technology that combines a genetic algorithm with the ant colony algorithm to achieve fine-grained task scheduling for edge computing through a hybrid algorithm. The simulation test results prove its effectiveness; the derived scheduling scheme is more reasonable and can be used for real-time digital economy services and development prediction.

However, the proposed system has so far been tested in an environment without large volumes of data. We plan to extend our work using large-scale data and big data analytics technologies, and to analyze different types of task scheduling and task offloading to improve the overall performance of the proposed method. Scheduling on an edge computing platform is challenging because different service providers (network, resource, and storage) have competing goals; moreover, user costs and provider profits should both be taken into account when running different applications on the edge platform. In the future, we will consider these competing goals when proposing new scheduling methods.

Availability of data and materials

Not applicable.

References

1. Sturgeon TJ (2021) Upgrading strategies for the digital economy. Glob Strateg J 11(1):34–57

2. Siew M, Cai D, Li L, Quek TQ (2020) Dynamic pricing for resource-quota sharing in multi-access edge computing. IEEE Trans Netw Sci Eng 7(4):2901–2912

3. Xu X, Shen B, Ding S, Srivastava G, Bilal M, Khosravi MR, Wang M (2020) Service offloading with deep Q-network for digital twinning-empowered internet of vehicles in edge computing. IEEE Trans Industr Inf 18(2):1414–1423

4. Al-Ansi A, Al-Ansi AM, Muthanna A, Elgendy IA, Koucheryavy A (2021) Survey on intelligence edge computing in 6G: characteristics, challenges, potential use cases, and market drivers. Future Internet 13(5):118

5. Varghese B, De Lara E, Ding AY, Hong CH, Bonomi F, Dustdar S, Willis P (2021) Revisiting the arguments for edge computing research. IEEE Internet Comput 25(5):36–42

6. Qi Q, Tao F (2019) A smart manufacturing service system based on edge computing, fog computing, and cloud computing. IEEE Access 7:86769–86777

7. Li K, Kim DJ, Lang KR, Kauffman RJ, Naldi M (2020) How should we understand the digital economy in Asia? Critical assessment and research agenda. Electron Commer Res Appl 44:101004

8. Viriyasitavat W, Da Xu L, Bi Z, Pungpapong V (2019) Blockchain and internet of things for modern business process in digital economy—the state of the art. IEEE Trans Comput Soc Syst 6(6):1420–1432

9. Litvinenko VS (2020) Digital economy as a factor in the technological development of the mineral sector. Nat Resour Res 29(3):1521–1541

10. Zekos G (2005) Foreign direct investment in a digital economy. Eur Bus Rev 17(1):52–68

11. Pan W, Xie T, Wang Z, Ma L (2022) Digital economy: an innovation driver for total factor productivity. J Bus Res 139:303–311

12. Banalieva ER, Dhanaraj C (2019) Internalization theory for the digital economy. J Int Bus Stud 50(8):1372–1387

13. Pei J, Zhong K, Li J, et al (2022) PAC: Partial Area Clustering for re-adjusting the layout of traffic stations in city's public transport. IEEE Trans Intell Transp Syst

14. Kumar M, Sharma SC, Goel A, Singh SP (2019) A comprehensive survey for scheduling techniques in cloud computing. J Netw Comput Appl 143:1–33

15. Arunarani AR, Manjula D, Sugumaran V (2019) Task scheduling techniques in cloud computing: a literature survey. Futur Gener Comput Syst 91:407–415

16. Mittal S, Katal A (2016) An optimized task scheduling algorithm in cloud computing. In: 2016 IEEE 6th International Conference on Advanced Computing (IACC), pp 197–202

17. Houssein EH, Gad AG, Wazery YM, Suganthan PN (2021) Task scheduling in cloud computing based on meta-heuristics: review, taxonomy, open challenges, and future trends. Swarm Evol Comput 62:100841

18. Tong Z, Chen H, Deng X, Li K, Li K (2020) A scheduling scheme in the cloud computing environment using deep Q-learning. Inf Sci 512:1170–1191

19. Shukri SE, Al-Sayyed R, Hudaib A, Mirjalili S (2021) Enhanced multi-verse optimizer for task scheduling in cloud computing environments. Expert Syst Appl 168:114230

20. Saeedi S, Khorsand R, Bidgoli SG, Ramezanpour M (2020) Improved many-objective particle swarm optimization algorithm for scientific workflow scheduling in cloud computing. Comput Ind Eng 147:106649

21. Ismayilov G, Topcuoglu HR (2020) Neural network based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing. Futur Gener Comput Syst 102:307–322

22. Bittencourt LF, Goldman A, Madeira ER, da Fonseca NL, Sakellariou R (2018) Scheduling in distributed systems: a cloud computing perspective. Comput Sci Rev 30:31–54

23. Velliangiri S, Karthikeyan P, Xavier VA, Baswaraj D (2021) Hybrid electro search with genetic algorithm for task scheduling in cloud computing. Ain Shams Eng J 12(1):631–639

24. Abualigah L, Diabat A (2021) A novel hybrid antlion optimization algorithm for multi-objective task scheduling problems in cloud computing environments. Clust Comput 24(1):205–223

25. Strumberger I, Bacanin N, Tuba M, Tuba E (2019) Resource scheduling in cloud computing based on a hybridized whale optimization algorithm. Appl Sci 9(22):4893

26. Hussain M, Wei LF, Lakhan A, Wali S, Ali S, Hussain A (2021) Energy and performance-efficient task scheduling in heterogeneous virtualized cloud computing. Sustain Comput Inform Syst 30:100517

27. Yahia HS, Zeebaree SR, Sadeeq MA, Salim NO, Kak SF, Adel AZ, Hussein HA (2021) Comprehensive survey for cloud computing based nature-inspired algorithms optimization scheduling. Asian J Res Comput Sci 8(2):1–16

28. Aburukba RO, AliKarrar M, Landolsi T, El-Fakih K (2020) Scheduling internet of things requests to minimize latency in hybrid fog–cloud computing. Futur Gener Comput Syst 111:539–551

29. Ali R, Afzal M, Sadiq M, Hussain M, Ali T, Lee S, Khattak AM (2018) Knowledge-based reasoning and recommendation framework for intelligent decision making. Expert Syst 35(2):e12242

30. Abd Elaziz M, Xiong S, Jayasena KPN, Li L (2019) Task scheduling in cloud computing based on hybrid moth search algorithm and differential evolution. Knowl-Based Syst 169:39–52


Acknowledgements

Not applicable.

Funding

Not applicable.

Author information


Contributions

Xiaoming Zhang: Investigation, conceptualization, methodology, funding acquisition, resources, software, validation, formal analysis, writing—review and editing. The author read and approved the final manuscript.

Corresponding author

Correspondence to Xiaoming Zhang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Zhang, X. A fine-grained task scheduling mechanism for digital economy services based on intelligent edge and cloud computing. J Cloud Comp 12, 30 (2023). https://doi.org/10.1186/s13677-023-00402-0
