
Advances, Systems and Applications

DSTS: A hybrid optimal and deep learning for dynamic scalable task scheduling on container cloud environment

Abstract

Containers have become the most dependable and lightweight virtualization platform for delivering cloud services, offering flexible deployment, portability, and scalability. In cloud container services, the scheduler component plays a critical role: it improves cloud resource utilization across diverse workloads while lowering costs. In this research, we present a hybrid optimal and deep learning approach for dynamic scalable task scheduling (DSTS) in a container cloud environment. To scale the containers’ virtual resources, we first propose a modified multi-swarm coyote optimization (MMCO) algorithm, which improves customer service level agreements. Then, to ensure priority-based scheduling, we design a modified pigeon-inspired optimization (MPIO) algorithm for task clustering and a fast adaptive feedback recurrent neural network (FARNN) for pre-virtual CPU allocation. Meanwhile, the task load monitoring mechanism is built on a deep convolutional neural network (DCNN), which enables dynamic priority-based scheduling. Finally, the performance of the proposed DSTS approach is evaluated using various test vectors, and the results are compared with state-of-the-art techniques.

Introduction

Cloud computing, which provides the computing services required for the Internet, has become one of the most popular technologies for the economy, society, and individuals in recent years [1]. Due to the recent growth in the load of diverse and sophisticated cloud workloads such as Internet of Things (IoT) devices, machine learning programmes, streaming A/V services, and cloud storage, demand for various cloud services has risen substantially [2]. With the introduction of numerous virtualization technologies such as VMware, Citrix, KVM, and Xen [3], the cloud computing business has evolved fast in recent years. Despite their widespread use, virtualization technologies have a number of drawbacks, including high time consumption, slow start-up and shutdown, and difficult planning and migration procedures [4]. In the conventional setup, the hardware is virtualized, and each virtual machine runs a whole operating system that supervises the machine’s application activities [5]. The application process in a container communicates directly with the host kernel, and the container does not have its own kernel or hardware virtualization. Containers are therefore far lighter than typical virtual machines [6, 7].

Furthermore, the spread of microservices, self-driving vehicles, and smart infrastructure is predicted to boost cloud service growth [8]. The backbone of cloud computing is virtualization technology, which enables applications to be decoupled from the underlying infrastructure by sharing resources and executing various programmes independently [9]. Containers have grown in popularity as a novel virtualization approach in recent years, bringing traditional virtual machines (VMs) numerous auspicious characteristics including a shared host operating system, quicker boot times, portability, scalability, and faster deployment [10]. Containers allow apps to bundle all of their dependencies in a sandbox, making them independent of the underlying platform while also increasing productivity and portability [11]. Docker, LXC, and Kubernetes are just a few of the container technologies available. Furthermore, several cloud service providers run containers on virtual machines (VMs) to increase container isolation, performance, and system management [12, 13]. Container technology is gaining traction among developers, and it’s now being used to deploy a wide range of microservices and applications, including smart devices, IoT, and fog/edge computing [14]. As a consequence, to fulfil the increased demand, numerous cloud service providers have begun to provide container-based cloud services; Google Container Engine, Amazon Elastic Container Service, and Azure Container Service are examples. The cloud computing paradigm is being revolutionised by container technology [15]. From the cloud service provider’s perspective, running containerized applications introduces an abstraction layer that deals with cluster management. The primary container orchestration platforms for automating, scaling, and controlling container-based infrastructure are Docker Swarm and Google Kubernetes [16, 17].
A container cluster’s overall structure comprises management nodes and worker nodes. The worker nodes execute containerized tasks, while the management nodes are responsible for the cluster and its nodes [18]. In addition, the manager keeps track of the cluster’s state by verifying each node’s status on a regular basis. The scheduling components, which are responsible for spreading loads among cluster nodes and controlling the container life cycle [19], play a critical part in container orchestration. Depending on the technology, container scheduling may take many different shapes. Overall, the primary goal of container scheduling is to start the containers on the ideal host and link them together [20].

Our contributions

A dynamic scalable task scheduling (DSTS) approach is proposed for cloud container environments to improve on these techniques. The main contributions of our proposed DSTS approach are given as follows:

  1. To provide a dynamic scalable task scheduling system for container cloud environments that reduces the makespan while using fewer computing resources and containers than current algorithms.

  2. To offer a unique clustered priority-based task scheduling technique that improves the scheduling system’s adaptability to the cloud environment while also speeding convergence.

  3. To create a task load monitoring system that allows for dynamic scheduling depending on priority.

  4. To assess the performance of the suggested dynamic scalable task scheduling using various test scenarios and metrics.

The remainder of the paper is organized as follows: the second section summarises recent work on task scheduling for cloud containers. We discuss the problem formulation and system design in the Problem methodology and system design section. The working of the proposed dynamic scalable task scheduling (DSTS) model is described in the Proposed methodology section. The Simulation results and analysis section discusses the simulation findings and comparative analyses. Finally, the Conclusion section brings the paper to a close.

Related works

Many studies on scalable task scheduling for cloud containers have been proposed in recent years around the globe. Table 1 summarises the literature and tabulates the research gaps in several categories.

Table 1 Summary of research gaps

Zhao et al. [21] studied how to improve today’s cloud services by reviewing scheduling designs for next-generation containers. In particular, that work creates and analyzes a new model that respects both workload balance and performance. Unlike previous studies, the model uses statistical techniques to combine load balance and utility performance into a single optimization problem and solve it effectively. The difficult element is that certain sub-problems are more complicated, necessitating the use of heuristic guidance. Liu et al. [22] suggested a multi-objective container scheduling technique based on CPU node consumption, memory usage across all nodes, time to transfer images over the network, container-node relations, and container clustering, all of which impact container programme performance. The authors provide metrics for all the important components, set the relevant qualifying functions, and then combine them in order to pick suitable nodes for the placement of the containers to be allotted in the scheduling process. Lin et al. [23] suggested a multi-objective optimization model for container-based microservice scheduling that uses an ant colony method to tackle the issue. The method takes into account not only the physical nodes’ use of computing and storage resources, but also the number of multi-objective requirements and the failure rate of physical nodes. These approaches make use of prospective algorithms’ quality assessment skills to assure the correctness of pheromone updates and to increase the likelihood of utilising multi-objective heuristic information to choose the optimum route. Adhikari et al. [24] suggested an energy-efficient container-based scheduling (EECS) technique for fast scheduling of various IoT and non-IoT tasks. To determine the optimum container for each task, an accelerated particle swarm optimization (APSO) method with minimum latency is applied.
Another significant duty in the cloud environment is resource planning in order to make the best use of resources on cloud servers. Ranjan et al. [25] showed how to design energy-efficient operations in resource-constrained data centres using container-based virtualization. Containers give users the freedom to obtain the vital resources suited to their own needs.

Chen et al. [26] suggested a functional restructuring system to control the operating sequence of each container in order to achieve maximum performance gain, as well as an adaptive fair-sharing system to effectively share the container-based virtualized environment. They also suggested a checkpoint-based system, which would be particularly useful for load balancing. Hu et al. [27] suggested ECSched, an improved container scheduler for planning simultaneous requests over several clusters with varied resource restrictions. It formulates container scheduling as a minimum-cost flow problem (MCFP) and represents container requirements using a specialised graph data format. ECSched builds a flow network from a set of requests and allows MCFP algorithms to schedule concurrent requests on-line. The authors evaluate ECSched in a variety of test clusters and run large-scale scheduling overhead simulations to see how it performs. Experiments demonstrate that ECSched is superior at container scheduling in terms of container performance and resource efficiency, and that large clusters only introduce minor and acceptable scheduling overheads.

For the VAS operating system, Rajasekar et al. [28] provided a scheduling and resource strategy. Infrastructure-as-a-Service (IaaS) suppliers provide compute, networking, and storage services. As a result, the VAS design may effectively schedule this burden at critical periods utilising a range of features and quality of service (QoS) levels. The method is scalable and dynamic, altering the load and base as needed. KCSS, a Kubernetes Container Scheduling Strategy, was introduced by Menouer et al. [29]. KCSS aims to optimise the scheduling of the many containers that users submit in order to increase performance with respect to energy usage. Because single-criterion scheduling has only a limited view of the cloud infrastructure and of user demands, it is less efficient; KCSS therefore introduces multi-criteria node selection. A cache-aware scheduling approach based on neighbourhood search was suggested by Li et al. [30]. Job categorization, node resource allocation, node clustering, and cache target planning are the four sub-problems of this paradigm. Jobs are separated into three sorts, and various resources are then allocated to each node depending on how well it performs. Work is stored late after the nodes with comparable functions are grouped. Ahmad et al. [31] surveyed a variety of current container scheduling approaches in order to advance research in this hot topic. The survey covers mathematical modelling, heuristics, meta-heuristics, and machine learning, and it divides scheduling approaches into four groups depending on the optimization algorithm used to construct the mapping. Then, based on performance measurements, it examines and identifies important benefits and difficulties for each class of scheduling approach, as well as the main hardware issues. Finally, the study discusses how further research might improve the future potential of innovative container technologies.
The container scheduling strategy provided by Rausch et al. [32] helps to make good use of the edge infrastructure on these sites. They also illustrate how to automatically adjust the weights of scheduling constraints to optimise high-level performance objectives such as task execution time, connection use, and cloud performance costs. They implement a prototype on a Kubernetes container orchestration system and deploy it on the edge testbed where it was constructed. Utilizing traces of the testbed’s typical loads, they evaluate the system with fine-grained simulations of different infrastructure situations.

Problem methodology and system design

Problem statement

  • Learning automata have been used to propose a self-adaptive task scheduling algorithm (ADATSA) [33]. In conjunction with the idle state of resources and the running stage of tasks in the current environment, the algorithm efficiently leverages the reinforcement learning capacity of learning automata and achieves an effective reward-penalty scheme for scheduling activities. A task load monitoring framework provides real-time monitoring of the environment and scheduling assessment feedback, together with a buffer queue for priority scheduling. To compare ADATSA against the non-automata-based algorithm PSOS, the learning-automata-based algorithm LAEAS, and the K8S scheduling engine with respect to resource imbalance, resource residual degree, and QoS, the researchers used the Kubernetes platform to simulate various scheduling scenarios.

  • In general, cloud computing environments require great portability, and containerisation assures environment compatibility by encapsulating applications together with their libraries, configuration files, and other requirements, allowing consumers [34] to quickly migrate and set up programmes across clusters.

  • However, there are still certain obstacles to be solved. Furthermore, the research literature [21,22,23,24,25,26,27,28,29,30,31,32,33, 35] lacks methods and models that enable dynamic scalability, in which consumers get QoS and good performance [36] while using the smallest amount of cloud resources possible, particularly for containerized services hosted on the cloud.

  • Cloud computing services benefit from dynamic scalability, which provides on-demand, timely, and dynamically changeable computing resources.

  • However, since the container cloud environment is very changeable and unpredictable, an environment model derived from static reward-penalty components might not be optimal. The ADATSA algorithm does not take into account the diversity of cloud resources. Users’ demands for cloud resources are often diverse, and user tasks are typically completed by a combination of heterogeneous cloud services.

The research gaps gathered above motivate the proposed methodology. A hybrid optimal and deep learning approach is proposed for dynamic scalable task scheduling (DSTS). The main contributions are listed as follows:

  • A modified multi-swarm coyote optimization (MMCO) algorithm is used for scaling the containers’ virtual resources, which enhances customer service level agreements.

  • A modified pigeon-inspired optimization (MPIO) algorithm is proposed for task clustering, and the fast adaptive feedback recurrent neural network (FARNN) is used for pre-virtual CPU allocation to ensure priority-based scheduling.

  • The task load monitoring mechanism is designed based on deep convolutional neural network (DCNN) which achieves dynamic scheduling based on priority.

System design of proposed methodology

Before being deployed to the cloud, programmes must be imaged and encapsulated in the container cloud platform. The purpose of task scheduling is to assign container images to the most appropriate node in order to make the most effective utilization of available resources. The difficulty of mapping relationships between containers and nodes may be represented as task scheduling in the container cloud. Figure 1 depicts the system architecture of the proposed dynamic scalable task scheduling (DSTS) paradigm. The DSTS model includes a number of processes, including container virtual resource scaling, task clustering, pre-virtual CPU allocation, and task load monitoring.

Fig. 1
figure 1

Dynamic scalable task scheduling (DSTS) model
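As a concrete illustration of the container-to-node mapping problem described above, the following is a minimal greedy best-fit placement sketch. The `Node` fields, the `schedule` helper, and the slack heuristic are illustrative assumptions, not the DSTS method itself:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu: float       # free CPU cores
    mem: float       # free memory (GiB)
    containers: list = field(default_factory=list)

def schedule(containers, nodes):
    """Assign each (name, cpu, mem) demand to the feasible node that
    leaves the least slack (best fit); returns the placement map."""
    placement = {}
    for cname, cpu, mem in containers:
        feasible = [n for n in nodes if n.cpu >= cpu and n.mem >= mem]
        if not feasible:
            placement[cname] = None  # no node can host this container
            continue
        best = min(feasible, key=lambda n: (n.cpu - cpu) + (n.mem - mem))
        best.cpu -= cpu
        best.mem -= mem
        best.containers.append(cname)
        placement[cname] = best.name
    return placement
```

A real orchestrator layers priorities, affinities, and rescheduling on top of such a mapping, which is what the DSTS components described next provide.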

Proposed methodology

In this section, we describe the following processes: container virtual resource scaling, task clustering, pre-virtual CPU allocation, and the task load monitoring mechanism.

Container virtual resources scaling using MMCO algorithm

The goal of cloud service level agreements (SLAs) is for service providers and customers to have a common understanding of priority areas, duties, warranties, and guarantees. An SLA specifies the dimensions and duties of the parties participating in the cloud setup, as well as the timeframe for reporting or resolving system vulnerabilities. As more firms depend on external suppliers for their vital systems, programmes, and data, service level agreements are becoming more important. The cloud SLA assures that cloud providers satisfy specific enterprise-level criteria and provide clients a clearly defined deliverable. If the provider fails to satisfy the requirements of the guarantee, it may be subject to financial penalties such as service time credits. The modified multi-swarm coyote optimization (MMCO) method is used to scale virtual resources in containers, improving customer service level agreements. In MMCO, the coyote population is split into F_d packs, each consisting of F_q coyotes; the number of coyotes in each pack is constant and consistent across all packs in the first proposal. As a result, the algorithm’s total population is the product F_d × F_q. Furthermore, the social condition of the qth coyote of the dth pack at the ath instant of time is specified as

$${SOC}_q^{d.a}=\overrightarrow{b}=\left({b}_1,{b}_2,..{b}_h\right)$$
(1)

where C denotes the number of decision variables, and the coyote’s adaptation to its environment (its fitness) is \({FIT}_q^{d.a}\in J\). The initial social condition of the qth coyote of the dth pack in the pth dimension is specified via a vector.

$${SOC}_{q.p}^{d.a}={Ua}_p+{j}_p\cdot \left({na}_p-{Ua}_p\right)$$
(2)

where Ua_p and na_p stand for, respectively, the lower and upper limits of the range of the pth decision variable, and j_p is a real random number created inside the bounds [0, 1] using a uniform probability distribution.

The fitness of each of the F_q × F_d coyotes in the environment is determined depending on their social conditions:

$${FIT}_q^{d.a}=m\left({SOC}_q^{d.a}\right)$$
(3)

In the case of a minimization problem, the alpha of the dth pack at the ath instant of time is

$${Alpha}^{d.A}=\left\{{SOC}_q^{d.A}\left|{\arg}_{q=\left\{1,2,\dots, {F}_q\right\}}\min l\left({SOC}_q^{d.A}\right)\right.\right\}$$
(4)

MMCO integrates all of the coyote’s information and calculates the cultural propensity of each pack:

$${Cul}_p^{d.A}=\left\{\begin{array}{ll}{z}_{\frac{\left({F}_q+1\right)}{2},p}^{d.A}& {F}_q\ \mathrm{is\ odd}\\ {}\frac{{z}_{\frac{F_q}{2},p}^{d.A}+{z}_{\left(\frac{F_q}{2}+1\right),p}^{d.A}}{2}& \mathrm{otherwise}\end{array}\right.$$
(5)

where z^{d.A} denotes the ranked social conditions of all coyotes of the dth pack at the ath instant, for every p in the range [1, C]. At the same time, each coyote is affected by the alpha (δ1) and by the other coyotes in the pack (δ2):

$${\delta}_1={Alpha}^{d.A}-{SOC}_{qj_1}^{d.A}$$
(6)
$${\delta}_2={Cult}^{d.A}-{SOC}_{qj_2}^{d.A}$$
(7)

The alpha influence δ1 indicates the cultural difference of a random coyote of the pack, qj1, from the pack leader, whereas the pack influence δ2 indicates the cultural difference of a random coyote qj2 from the cultural tendency of the pack. In the MMCO algorithm, during initialization, the swarm members (candidate solutions) are randomly seeded in the search space.

$${a}_{s.p}={U}_p+{j}_{s.p}\times \left({X}_p-{U}_p\right)$$
(8)

where a_{s.p} represents the sth solution in the pth dimension, U_p and X_p are the lower and upper edges of the solution space, respectively, and j_{s.p} is a uniformly generated random number in [0, 1].

$$T=\arg \min \left\{l\left(\overrightarrow{a}\right)\right\}$$
(9)

From this point, two different equations may be used to generate the multi-swarm candidates.

$${K}_{A.p}={a}_{s.p}+\alpha \times \left({T}_p-{a}_{o.p}\right)$$
(10)
$${K}_{A.p}={a}_{s.p}+\alpha \times \left({a}_{s.p}-{a}_{o.p}\right)$$
(11)

where the indices s and o must not be identical and α is a scalability factor. The equation used to update each dimension of a candidate solution is an important part of the process. The working of the container virtual resource scaling process is given in Algorithm 1.

Algorithm 1
figure a

Container virtual resources scaling using MMCO algorithm
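A minimal sketch of the initialization and multi-swarm update rules of Eqs. (8)-(11) is shown below; the greedy acceptance step and the 50/50 choice between Eqs. (10) and (11) are illustrative assumptions, not the paper's exact procedure:

```python
import random

def init_swarm(n, lower, upper):
    """Random initialization of n candidate solutions, per Eq. (8)."""
    return [[lo + random.random() * (hi - lo) for lo, hi in zip(lower, upper)]
            for _ in range(n)]

def mmco_step(swarm, fitness, lower, upper, alpha=0.7):
    """One candidate-generation step following Eqs. (9)-(11): move each
    solution toward the global best T (Eq. 10) or along the difference
    from a random member (Eq. 11), keeping only improvements."""
    best = list(min(swarm, key=fitness))                 # T, Eq. (9)
    dim = len(lower)
    out = []
    for s, a_s in enumerate(swarm):
        o = random.choice([i for i in range(len(swarm)) if i != s])
        a_o = swarm[o]
        if random.random() < 0.5:                        # Eq. (10)
            cand = [a_s[p] + alpha * (best[p] - a_o[p]) for p in range(dim)]
        else:                                            # Eq. (11)
            cand = [a_s[p] + alpha * (a_s[p] - a_o[p]) for p in range(dim)]
        cand = [min(max(c, lower[p]), upper[p]) for p, c in enumerate(cand)]
        out.append(cand if fitness(cand) < fitness(a_s) else a_s)
    return out
```

Because each member is only replaced when the candidate improves its fitness, the best solution in the swarm never degrades across steps.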

Task clustering using modified pigeon-inspired optimization (MPIO) algorithm

Clustering is a procedure that divides tasks into different categories depending on application demand, such as load-balancing clusters, high-availability clusters, and compute clusters. The primary emphasis of load-balancing clusters is resource use on the host system, particularly the virtual machine. These clusters are utilised to balance constant and dynamic loads, as well as to move an application from one cloud provider to another. The second kind is fault-tolerant high-availability clusters that are built to tolerate point failures. For task clustering, we use a modified pigeon-inspired optimization (MPIO) algorithm. The activation function ties the information of the hidden state of prior time steps to the item in the current sequence, and provides it to the gate as follows:

$${H}_r=\upsilon \Big({X}_r{K}^H+{t}_{r-1}{v}^H+{b}_H\Big)$$
(12)

where H_r is the forget gate, X_r is the input at time step r, and t_{r-1} represents the previous time step’s hidden state. K^H is the input layer’s weight, v^H is the recurrent weight of the hidden state, and b_H is the bias of the input layer. The following are the equations for the input gate and the cell update:

$${i}_r=\upsilon \left({X}_r{K}^i+{t}_{r-1}{v}^i+{b}_i\right)$$
(13)
$${\overset{\sim }{E}}_s=\tanh \left({X}_r{Z}^e+{t}_{r-1}{v}^e+{b}_e\right)$$
(14)
$${E}_r={E_{r-1}}^{\ast }{H}_r+{i_r}^{\ast }{\overset{\sim }{E}}_s$$
(15)

The output gate determines the hidden states to which the sigmoid activation function is applied. To create the output, the newly updated cell state is passed through tanh and multiplied as follows.

$${Z}_r=\upsilon \Big({X}_r{X}^Z+{t}_{r-1}{v}^Z+{b}_Z\Big)$$
(16)
$${t}_r={Z_r}^{\ast}\tanh \left({E}_r\right)$$
(17)

The update gate functions similarly to the forget and input gates of an LSTM. The current input is multiplied by its weight, and the hidden state at the prior time point is multiplied by its weight. Using the sigmoid function to squash the values between zero and one, the two contributions are merged:

$${L}_r=\upsilon \left({X}_r{X}^L+{d}_{r-1}{v}^l+{b}_l\right)$$
(18)

where L_r denotes the update gate, X_r is the input vector at time step r, and d_{r-1} is the earlier output from the preceding unit. X^L is the weight of the input layer, and v^l is the recurrent weight. The b_l is the bias of the input layer. The reset gate’s output is as follows:

$${s}_r=\upsilon \left({X}_r{K}^s+{t}_{r-1}{v}^S+{b}_S\right)$$
(19)

The reset gate is employed in the new memory cell to accumulate the information of the preceding phase. This enables the network to retain only the relevant earlier events in chronological sequence. The present memory content is as follows:

$${\overset{\sim }{E}}_r=\tanh \left({X}_rK+v\left({s}_r\Theta {d}_{r-1}\right)\right)$$
(20)
$${d}_r={L}_r\Theta {d}_{r-1}+\left(1-{L}_r\right)\Theta \upsilon \left(\overset{\sim }{E_r}\right)+{b}_d$$
(21)
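The gate computations of Eqs. (18)-(21) might be sketched as a scalar step like the one below; the weight-dictionary keys are illustrative placeholders, and the candidate-state blend follows the standard gated-recurrent form rather than the paper's exact bias terms:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_cell(x, d_prev, w):
    """One scalar gated-recurrent step following Eqs. (18)-(21)."""
    L = sigmoid(x * w["XL"] + d_prev * w["vl"] + w["bl"])  # update gate, Eq. (18)
    s = sigmoid(x * w["Ks"] + d_prev * w["vS"] + w["bS"])  # reset gate, Eq. (19)
    E = math.tanh(x * w["K"] + w["v"] * (s * d_prev))      # candidate memory, Eq. (20)
    return L * d_prev + (1.0 - L) * E                      # blended state, Eq. (21)
```

Because the update gate L is squashed into (0, 1) and the candidate into (-1, 1), the recurrent state stays bounded across steps.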

Each pigeon has a specific position in the optimization problem:

$${X}_i=\left[{x}_{i1},{x}_{i2},\dots {x}_{ic}\right]$$
(22)

where c is the dimension of the problem to be tackled, i = 1, 2, …, M, and M is the pigeon population size; each pigeon also has a velocity that is stated as follows:

$${u}_i=\left[{U}_{i1},{U}_{i2},\dots {U}_{im}\right]$$
(23)

First, the position of each pigeon in the search region and its velocity are initialised. Then, as the number of iterations grows, the velocity u_i is updated by repeating the following steps:

$${u}_i(r)={u}_i\left(r-1\right).{e}^{- sr}+ Rand.\left({X}_{FBest}-{X}_i\left(r-1\right)\right)$$
(24)

where s is the map-and-compass factor and r is the current iteration. Then the next position x_i is calculated as follows:

$${x}_i(r)={x}_i\left(r-1\right)+{u}_i(r)$$
(25)
Algorithm 2
figure b

Task clustering using MPIO algorithm

As a result, in the landmark operator phase, the position at the rth iteration can be updated by

$${X}_i(r)={X}_i\left(r-1\right)+ Rand.\left({X}_{Center}\left(r-1\right)-{X}_i\left(r-1\right)\right)$$
(26)
$${X}_{Center}(r)=\frac{\sum \limits_{i=1}^m{X}_i(r)\cdot fitness\left({X}_i(r)\right)}{m_p\sum \limits_{i=1}^m fitness\left({X}_i(r)\right)}$$
(27)
$${m}_q(r)= ceil\left(\frac{m_p\left(r-1\right)}{2}\right)$$
(28)

where r is the current iteration number, r = 1, 2, …, R_Max, and R_Max is the number of iterations in which the landmark operator is active; Eq. (28) halves the number of pigeons m_p at each landmark iteration, with ceil(·) rounding up to the nearest integer. The fitness function to be optimized is:

$$fitness\left({X}_j(r)\right)={H}_{Max}\left({X}_j(r)\right)$$
(29)
$$fitness\left({X}_i(r)\right)=\frac{1}{H_{Min}\left({X}_i(r)\right)+\varepsilon }$$
(30)

After each iteration the pigeons’ positions move closer to the centre point, until the final iteration R_Max is reached. Algorithm 2 describes the operation of the task clustering process utilising the MPIO algorithm.
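The two MPIO phases above can be sketched as follows; the parameter defaults, the bound values, and the fitness-weighted centre (per Eqs. 27 and 30) are illustrative assumptions for a minimization problem:

```python
import math
import random

def mpio(fitness, dim, n=20, t1=50, t2=30, s=0.2, lo=-10.0, hi=10.0):
    """Minimal pigeon-inspired optimization sketch per Eqs. (24)-(28):
    a map-and-compass phase (velocity pulled toward the global best)
    followed by a landmark phase (halving the flock toward a
    fitness-weighted centre)."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    for r in range(1, t1 + 1):                   # map-and-compass phase
        best = list(min(X, key=fitness))
        for i in range(n):
            for p in range(dim):
                V[i][p] = (V[i][p] * math.exp(-s * r)
                           + random.random() * (best[p] - X[i][p]))  # Eq. (24)
                X[i][p] += V[i][p]                                   # Eq. (25)
    m = n
    for r in range(1, t2 + 1):                   # landmark phase
        X.sort(key=fitness)
        m = math.ceil(m / 2)                     # Eq. (28)
        X = X[:m]
        w = [1.0 / (fitness(x) + 1e-12) for x in X]   # Eq. (30) weights
        centre = [sum(w[i] * X[i][p] for i in range(m)) / sum(w)
                  for p in range(dim)]           # Eq. (27), fitness-weighted
        for i in range(m):
            for p in range(dim):
                X[i][p] += random.random() * (centre[p] - X[i][p])   # Eq. (26)
        if m <= 1:
            break
    return min(X, key=fitness)
```

In the DSTS setting the fitness would score a candidate task-to-cluster assignment; here a plain numeric objective stands in for it.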

Pre-virtual CPU allocation using FARNN technique

In cloud computing, up-to-date virtual processor scheduling techniques are essential to hide physical resources from running programs and reduce performance loss during virtualization. However, the different QoS requirements of cloud applications make it difficult to evaluate and predict the behavior of virtual processors. Based on the evaluation process, a specific scheduling plan regulates virtual machine priorities when processing I/O requirements for equitable distribution. Our scheme evaluates the CPU intensity and I/O intensity of virtual machines, making it very effective across a wide range of tasks. Here we apply the fast adaptive feedback recurrent neural network (FARNN) in the pre-virtual CPU allocation phase to ensure priority-based scheduling.

The FARNN methodology is a set of computing techniques that use model and method learning to anticipate outcomes by simulating the human brain’s problem-solving process. The three network layers of a normal FARNN approach are the input layer, the hidden layer, and the output layer. For occupancy prediction systems, the input layer typically contains the MAC addresses recorded in the current time interval. The following is the format of the MAC address input vector at time T:

$$Y(T)=\left\{{y}_1,{y}_2,.\dots, {y}_j,\dots, {y}_l\right\}$$
(31)

At the current time T, the collection of all MAC addresses is denoted as Y(T), and l stands for the overall number of MAC addresses in use at that period. The jth detected MAC address is represented as y_j. The input and network weights are used to compute the hidden layer neurons:

$$h(T)={Z_1^t}^{\ast }Y(T)+a$$
(32)

The output layer combines the results of the hidden layer and converts them:

$$X(T)=f\left({Z_2^t}^{\ast }h(T)\right)=f\left({Z_2^t}^{\ast}\left({Z_1^t}^{\ast }Y(T)+a\right)\right)$$
(33)

The hidden layer output is denoted as h(T) and the output layer output is referred to as X(T). The weight from the input layer to the hidden layer is denoted as \({Z}_1^t\), and the weight from the hidden layer to the output layer as \({Z}_2^t\). The activation function is indicated as f(.) and the random bias is denoted as a in the output layer. In the fast adaptive variant, a feature layer is first inserted between the input layer and the hidden layer to determine the transfer probabilities of each MAC address. Because the present occupancy state is dependent on the past occupancy status, the transfer probability and transfer probability matrix may be utilized to model such processes. The transfer matrix may be stated as follows, assuming that an occupant’s location in a place is either “in” or “out.”

$$tpm\left|{}_{yK}=\left[\begin{array}{l}{y}_K^{j-0}\kern0.6em {y}_K^{j-j}\\ {}{y}_K^{0-0}\kern0.6em {y}_K^{0-j}\kern0.24em \end{array}\right]\right.$$
(34)

The transition probability matrix of one load is denoted as tpm_{yK}. In the transfer matrix, \({y}_K^{j-0}\) and \({y}_K^{j-j}\) indicate the observed probability that a single inhabitant whose position is “in” at the present period will be “out” and “in” at the following period, correspondingly; \({y}_K^{0-0}\) and \({y}_K^{0-j}\) signify the observed probability that an inhabitant whose position is “out” at the present period interval will be “out” and “in” in the next period interval. The probability might be computed using Bayesian models and the observed conditional probability. For example

$${y}_K^{j-j}=p\left( state\kern0.34em observed=j\left| state\kern0.34em observed=j\right.\right)$$
(35)

The one MAC address occupied probability is

$${y}_K^{j-j}=\frac{\sum {M}_{1-1}}{\sum {M}_{1-1}+\sum {M}_{1-0}}$$
(36)
$${y}_K^{0-0}=\frac{\sum {M}_{0-0}}{\sum {M}_{0-0}+\sum {M}_{0-1}}$$
(37)

where M_{1-1} is the frequency with which the occupancy state changed from “in” to “in” and M_{1-0} is the frequency with which the occupancy state changed from “in” to “out”, respectively. Similarly, M_{0-0} and M_{0-1} address the frequencies with which the occupancy state changed from “out” to “out” and from “out” to “in” individually. As the estimated frequencies change, the training database will be automatically updated, and the transfer probabilities will be adjusted at the next estimate. Because each MAC address in the load is given a probability, each MAC address may be represented as follows:

$${y}_K=\left\{{y}_K^{mac},{y}_K^{0-j},{y}_K^{j-j}\right\}$$
(38)

The input vector is then updated as follows:

$$Y(T)=\left\{{y}_1^{mac},{y}_1^{0-j},{y}_1^{j-j},{y}_2^{mac},{y}_2^{0-j},{y}_2^{j-j},\dots {y}_K^{mac},{y}_K^{0-j},{y}_K^{j-j}\right\}$$
(39)
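The per-address bookkeeping behind Eqs. (36)-(38) can be sketched as below; the binary encoding (1 = "in", 0 = "out") and the helper name are illustrative assumptions:

```python
def mac_profile(mac, states):
    """Build the {mac, y^{0-j}, y^{j-j}} triple of Eq. (38) from a binary
    occupancy sequence by counting state transitions, per Eqs. (36)-(37)."""
    counts = {(1, 1): 0, (1, 0): 0, (0, 0): 0, (0, 1): 0}
    for prev, cur in zip(states, states[1:]):
        counts[(prev, cur)] += 1
    # Guard against an empty row of the transition matrix.
    p_in_in = counts[(1, 1)] / max(counts[(1, 1)] + counts[(1, 0)], 1)    # Eq. (36)
    p_out_out = counts[(0, 0)] / max(counts[(0, 0)] + counts[(0, 1)], 1)  # Eq. (37)
    return {"mac": mac, "p_out_in": 1.0 - p_out_out, "p_in_in": p_in_in}  # Eq. (38)
```

Concatenating such triples over all observed MAC addresses yields the extended input vector of Eq. (39).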

After that, the feature layer may be structured as follows:

$$f(T)=\left\{Y(T),Y\left(T-1\right),Y\left(T-2\right),.\dots Y\left(T-\Delta T\right)\right\}$$
(40)

The length of the time window is ΔT, and f(T) is the vector of the feature layer at time T. Assuming the number of MAC reports in the time window is K, then

$$f(T)=\left\{{y}_1^{mac},{y}_1^{0-j},{y}_1^{j-j},{y}_2^{mac},{y}_2^{0-j},{y}_2^{j-j},\dots {y}_K^{mac},{y}_K^{0-j},{y}_K^{j-j}\right\}$$
(41)

At regular intervals, the context layer retains the hidden layer feedback signal, acting as a short-term memory to capture temporal dependency. The hidden layer’s output may be structured as follows:

$$h(T)=g\left({\omega}^1D\left(T-1\right)+{\omega}^2\left(f(T)\right)\right)$$
(42)
Algorithm 3
figure c

Pre-virtual CPU allocation using FARNN technique

The output of the context layer is

$$D\left(T-1\right)=\alpha D\left(T-2\right)+h\left(T-1\right)$$
(43)

where h(T) is the output vector of the hidden layer at time interval T, and D is the output vector of the context layer. ω1 is the connection weight from the context layer to the hidden layer, and ω2 is the connection weight from the feature layer to the hidden layer. α is the self-connected feedback gain factor. g(·) represents the hidden layer’s activation function, which is set to

$$g(y)=\frac{1}{1+{e}^{-y}}$$
(44)

The signal transfer from the Hidden layer to the Output layer is:

$$x(T)={\omega}^3h(T)={\omega}^3g\left({\omega}^1D\left(T-1\right)+{\omega}^2f(T)\right)$$
(45)

where x(T) is the output variable at time T, which in this case is the predicted occupancy, and ω3 is the connection weight from the Hidden layer to the Output layer. The cost function for updating and learning the connection weights is:

$$e=\sum \limits_{T=1}^M{\left[x(T)-c(T)\right]}^2$$
(46)

c(T) is the actual occupancy output, and M is the number of training time samples. Algorithm 3 describes the process of pre-virtual CPU allocation.
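The FARNN forward pass described by eqs. (42)–(46) can be sketched as follows. The layer sizes, random weight initialisation, and feedback gain are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))  # activation function, eq. (44)

class FARNN:
    """Minimal recurrent forward pass with a self-connected context layer."""

    def __init__(self, n_feat, n_hidden, n_out, alpha=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((n_hidden, n_hidden))  # context -> hidden
        self.w2 = rng.standard_normal((n_hidden, n_feat))    # feature -> hidden
        self.w3 = rng.standard_normal((n_out, n_hidden))     # hidden -> output
        self.alpha = alpha                                   # feedback gain factor
        self.D = np.zeros(n_hidden)                          # context state D(T-1)
        self.h_prev = np.zeros(n_hidden)                     # previous hidden output

    def step(self, f_t):
        # Context layer keeps a decayed copy of the last hidden output, eq. (43)
        self.D = self.alpha * self.D + self.h_prev
        h = sigmoid(self.w1 @ self.D + self.w2 @ f_t)        # hidden output, eq. (42)
        self.h_prev = h
        return self.w3 @ h                                   # predicted occupancy, eq. (45)

def cost(x_pred, c_true):
    # Squared-error cost over the training samples, eq. (46)
    return float(np.sum((np.asarray(x_pred) - np.asarray(c_true)) ** 2))
```

Feeding successive Feature-layer vectors f(T) into `step` makes the prediction depend on the window history through the context state, which is the short-term memory role described above.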

Task load monitoring using DCNN method

The task load monitoring function involves five steps: 1) data collection, 2) data filtering, 3) data aggregation, 4) data analysis, and 5) alerting and reporting. The monitoring system should gather, through various probes, quantities such as processing time, CPU speed (from the CPU probe), memory usage, memory-retrieval delay, power consumption (from the power analysis), frequency, latency, and delay. To classify the gathered data, essential characteristics such as structure, tactics, update approaches, and types are considered. We employ a deep convolutional neural network (DCNN) to measure task load in this article. In a DCNN, the convolution layer contains numerous filters that respond to local patterns of interest. The result is passed through a non-linear activation function to generate a feature map, which is then downsampled to reduce the number of computed values. Stacking convolution layers at the DCNN's front end first extracts local features from the source data and then gradually builds more abstract representations layer by layer. A well-trained stack produces a new representation of the original input that can be classified most effectively; for this reason, the convolution layer is also called the feature-extraction layer. A set of fully connected layers is attached after the convolution layers. For the training set samples,

$$n=\left\{\left({y}^{(j)},{x}^{(j)}\right)\right\},\kern0.48em j=1,2,\dots, n$$
(47)

Each sample consists of a feature vector y(j) and its label x(j). The error is obtained by introducing a loss function which, as shown in the following equation, contains an overall error term and a regularization term.

$$I\left(z,a\right)\approx \frac{1}{m}\sum \limits_{j=1}^mk\left({H}_{\left\{z,a\right\}}\left({y}^{(j)},{x}^{(j)}\right)\right)+\lambda \sum \limits_{j,i}{z}_{j,i}^2$$
(48)

Here, z represents the weights and 'a' denotes the biases, m is the batch size, and the hyper-parameter λ regulates the strength of the regularization term. The dissimilarity between the produced estimate and the real value is measured by the mean squared error, expressed as:

$$D=\frac{1}{2M}\sum \limits_y{\left\Vert x(y)-b(y)\right\Vert}^2$$
(49)

When computing the gradient, the coefficient 1/2 is a normalization factor that cancels the exponent during differentiation, so further derivatives can be simplified without side effects. The weights and offsets can then be modified along the gradient to reduce the loss.

$$\Delta \omega =\left(b(y)-x(y)\right){\sigma}^{\hbox{'}}(w)y$$
(50)
$$\Delta a=\left(b(y)-x(y)\right){\sigma}^{\hbox{'}}(w)$$
(51)

In the neuron, the input is denoted as w, the activation function as σ, the change in the weight as Δω, and the change in the offset as Δa.

$${\omega}^{\left(m+1\right)}={\omega}^{(m)}-{\frac{\eta }{M}}^{\ast}\Delta \omega$$
(52)
$${a}^{\left(m+1\right)}={a}^{(m)}-{\frac{\eta }{M}}^{\ast}\Delta a$$
(53)

The learning rate is represented as η, the weight and offset at the mth iteration are denoted as ω(m) and a(m), respectively, and M is the total number of samples. Algorithm 4 describes the working of the task load monitoring function using the DCNN method.
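A minimal sketch of the gradient step in eqs. (50)–(53) for a single sigmoid neuron, following the document's convention that b(y) is the produced output and x(y) the actual value; the training data and learning rate below are illustrative assumptions:

```python
import math

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

def sigmoid_prime(w):
    s = sigmoid(w)
    return s * (1.0 - s)  # derivative of the sigmoid, used as sigma'(w)

def train_step(omega, a, ys, targets, eta):
    """One batch update of (omega, a) per eqs. (50)-(53)."""
    M = len(ys)
    d_omega = d_a = 0.0
    for y, x in zip(ys, targets):
        w = omega * y + a                               # neuron input
        b = sigmoid(w)                                  # produced output b(y)
        d_omega += (b - x) * sigmoid_prime(w) * y       # weight change, eq. (50)
        d_a += (b - x) * sigmoid_prime(w)               # offset change, eq. (51)
    # descend along the averaged gradient, eqs. (52)-(53)
    return omega - (eta / M) * d_omega, a - (eta / M) * d_a
```

Repeating `train_step` over batches drives the squared-error loss of eq. (49) downward, which is the behaviour the update rules are designed to produce.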

Algorithm 4

Task load monitoring using DCNN method

Simulation results and analysis

In this part, we conduct experiments to test and assess the proposed dynamic scalable task scheduling (DSTS) model, and the simulation results are compared with current state-of-the-art models including ADATSA, LAEAS, PSOS, and the K8S scheduling engine.

  • A self-adapting task scheduling algorithm (ADATSA) is used to overcome the repeat-scheduling issue [33]. The approach reduces the reliance of existing dynamic scheduling strategies on the container cloud architecture and improves the match between jobs and their runtime environments.

  • In the cloud system, the learning automata based energy-aware scheduling (LAEAS) algorithm [37] is employed for real-time job scheduling.

  • In a container cloud context, the performance-based service oriented scheduling (PSOS) approach [38] has been utilised to handle scheduling concerns such as the average latency of service instances, resource consumption, and load balancing.

  • Unlike Borg and Omega, which were built as completely Google-internal systems, the Kubernetes (K8S) scheduling engine [39] is open source.

Dataset description

Kubernetes (v1.16.2) was used to create an experimental setup on 53 servers with the same specifications as the experimental stage, comprising 3 master and 50 slave nodes. Furthermore, we used Python 3.7 as the main programming language for the quality-analysis implementation, with Anaconda Navigator integration and Spyder and Jupyter as execution environments. The tasks in this simulation are separated into five categories: task 1, task 2, task 3, task 4, and task 5. In task 1, we apply static scheduling with CPU-oriented master and slave resources of 128 cores and 64 cores, respectively. In task 2, we apply dynamic scheduling with memory-oriented master and slave resources of 256 GB and 128 GB, respectively. In task 3, we apply time-based static scheduling with disk-oriented master and slave resources of 1000 GB each. In task 4, we configure time-based dynamic scheduling with bandwidth-oriented master and slave resources of 10 Gbps each. In task 5, we examine test quality with resource non-oriented master and slave counts of 3 and 50, where resource non-oriented applications are those whose resource needs are mixed and show no affinity for a particular resource. Table 2 summarises the task partitioning and resource requirements. Due to a shortage of applications, we used recurrent distributions to mimic a large-scale application distribution. The experiment began with a total of 100 applications, 20 for each category. Table 3 describes the hyper-parameter settings of the proposed optimization algorithm.

Table 2 Dataset descriptions
Table 3 Optimization algorithm hyper-parameter settings

Performance evaluation metrics

In this section, the simulation results of the proposed DSTS model are compared with existing state-of-the-art models such as ADATSA, LAEAS, PSOS, and the K8S scheduling engine in terms of different service-quality evaluation metrics: resource imbalance degree (DId), resource residual degree (DRd), response time (RT), and throughput (TH). The metrics are defined as follows:

$${D}_{Id}=\sum \limits_{i=1}^N\frac{L_r\left({\alpha}_i\right)}{N}$$
(54)
$${D}_{Rd}=\sum \limits_{i=1}^N\frac{S_r\left({\beta}_i\right)}{N}$$
(55)
$${R}_T=\frac{1}{N_{app}}\sum \limits_{j=1}^{N_{app}}{R}_T\left({WS}_{app}^{(j)}\right)$$
(56)
$${T}_H=\frac{N_{req}\left({WS}_{app}\right)}{T_{end}\left({WS}_{app}\right)-{T}_{start}\left({WS}_{app}\right)}$$
(57)

where Lr(αi) and Sr(βi) represent the node resource imbalance degree (ref. eqn (18)) and the node resource residual degree (ref. eqn (19)), respectively, for N node resources. WSapp denotes the web-service application under test, RT(WSapp) its response delay, Nreq(WSapp) the number of requests it serves, and Tstart(WSapp) and Tend(WSapp) the start and end times of the test, respectively.
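Assuming the per-node imbalance and residual values and the request counts are already available, the four metrics of eqs. (54)–(57) reduce to simple averages and ratios. The helper names and input values below are our own, not from the paper:

```python
def imbalance_degree(node_imbalance):
    """Mean of per-node imbalance values L_r(alpha_i), eq. (54)."""
    return sum(node_imbalance) / len(node_imbalance)

def residual_degree(node_residual):
    """Mean of per-node residual values S_r(beta_i), eq. (55)."""
    return sum(node_residual) / len(node_residual)

def mean_response_time(response_times):
    """Average response delay over the N_app web applications, eq. (56)."""
    return sum(response_times) / len(response_times)

def throughput(n_requests, t_start, t_end):
    """Requests served per unit of test duration, eq. (57)."""
    return n_requests / (t_end - t_start)

# Hypothetical measurements from a small test run
d_id = imbalance_degree([0.2, 0.4, 0.3])
th = throughput(n_requests=1200, t_start=0.0, t_end=60.0)
```

Lower DId and DRd and higher TH indicate better scheduling, which is the direction used in the comparisons below.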

Comparative analysis

Result comparison of Task-1

The influence of tasks on static scheduling performance of our new DSTS model is compared to that of the current ADATSA, LAEAS, PSOS, and K8S models in this scenario. The proposed and current task scheduling models are compared in terms of resource imbalance degree (DId) in Fig. 2. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The suggested DSTS model has a resource imbalance degree (DId) of 12.698%, 10.000%, 7.895%, and 6.173%, respectively, lower than the current ADATSA, LAEAS, PSOS, and K8S models. Figure 3 shows the comparative analysis of resource residual degree (DRd) for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource residual degree (DRd) of proposed DSTS model is 10.280%, 8.155%, 6.426% and 4.695% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.

Fig. 2

Comparative analysis of resource imbalance degree (DId) (Task-1)

Fig. 3

Comparative analysis of resource residual degree (DRd) (Task-1)

Result comparison of Task-2

The influence of tasks on the dynamic scheduling performance of our proposed DSTS model is compared with that of the current ADATSA, LAEAS, PSOS, and K8S models in this scenario. Figure 4 shows the comparative analysis of resource imbalance degree (DId) for the proposed and existing task scheduling models. We can see from this graph that the DSTS dynamic scheduling model outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource imbalance degree (DId) of proposed DSTS model is 15.275%, 9.285%, 8.590% and 6.699% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 5 shows the comparative analysis of resource residual degree (DRd) for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of dynamic scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource residual degree (DRd) of proposed DSTS model is 11.710%, 8.555%, 6.740% and 5.462% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.

Fig. 4

Comparative analysis of resource imbalance degree (DId) (Task-2)

Fig. 5

Comparative analysis of resource residual degree (DRd) (Task-2)

Result comparison of Task-3

In this scenario, the influence of tasks on our proposed DSTS model’s time-based static scheduling performance is compared to the current ADATSA, LAEAS, PSOS, and K8S models. Figure 6 shows the comparative analysis of resource imbalance degree (DId) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource imbalance degree (DId) of proposed DSTS model is 15.146%, 15.275%, 9.285% and 8.590% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 7 shows the comparative analysis of resource residual degree (DRd) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of static scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models in terms of performance. The resource residual degree (DRd) of proposed DSTS model is 6.796%, 11.710%, 8.555% and 6.740% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.

Fig. 6

Comparative analysis of resource imbalance degree (DId) with time (Task-3)

Fig. 7

Comparative analysis of resource residual degree (DRd) with time (Task-3)

Result comparison of Task-4

In this scenario, the influence of tasks on our proposed DSTS model’s time-based dynamic scheduling performance is compared to the current ADATSA, LAEAS, PSOS, and K8S models. Figure 8 shows the comparative analysis of resource imbalance degree (DId) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of dynamic scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource imbalance degree (DId) of proposed DSTS model is 13.763%, 15.146%, 12.878% and 11.781% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 9 shows the comparative analysis of resource residual degree (DRd) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model of dynamic scheduling outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource residual degree (DRd) of proposed DSTS model is 6.703%, 6.796%, 11.710% and 8.555% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.

Fig. 8

Comparative analysis of resource imbalance degree (DId) with time (Task-4)

Fig. 9

Comparative analysis of resource residual degree (DRd) with time (Task-4)

Result comparison of Task-5

In this scenario, the effect of our proposed DSTS model’s quality validation is compared to the current ADATSA, LAEAS, PSOS, and K8S models. Figure 10 shows the comparative analysis of resource imbalance degree (DId) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model outperforms the ADATSA, LAEAS, PSOS, and K8S models. The resource imbalance degree (DId) of proposed DSTS model is 13.965%, 13.763%, 15.146% and 12.878% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 11 shows the comparative analysis of resource residual degree (DRd) with respect to time for the proposed and existing task scheduling models. We can see from this graph that the DSTS model outperforms the ADATSA, LAEAS, PSOS, and K8S models in terms of performance. The resource residual degree (DRd) of proposed DSTS model is 13.445%, 6.703%, 6.796% and 11.710% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.

Fig. 10

Comparative analysis of resource imbalance degree (DId) with time (Task-5)

Fig. 11

Comparative analysis of resource residual degree (DRd) with time (Task-5)

Table 4 describes the performance comparison of the proposed and existing task scheduling models in terms of response time (RT) and throughput (TH) with varying simulation time. The average response time (RT) of the proposed DSTS model is 25.448%, 32.616%, 37.814% and 40.502% better than that of the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 12 gives the graphical representation of the proposed and existing task scheduling models. The average throughput (TH) of the proposed DSTS model is 33.168%, 38.119%, 44.059% and 49.010% higher than the existing ADATSA, LAEAS, PSOS and K8S models respectively. Figure 13 gives the corresponding graphical representation. Figure 14 shows the runtime overhead of the proposed and existing task scheduling models: the average runtime overhead of the proposed DSTS model is 12.356%, 15.090%, 18.367% and 21.578% lower than the existing ADATSA, LAEAS, PSOS and K8S models respectively.

Table 4 Comparative analysis of quality of service metrics
Fig. 12

Comparative analysis of response time (RT) (Task-5)

Fig. 13

Comparative analysis of Throughput (TH) (Task-5)

Fig. 14

Comparative analysis of runtime overhead

Case study

In the past, Kaplan hosted its applications on the Amazon Elastic Compute Cloud (EC2). Engineers were required to update applications manually, and on average four dedicated EC2 hosts were used. Rowan Drabo, head of cloud operations at Kaplan, said an application update would take hours to take effect, and cost analysis showed spending of more than $500 per month on EC2. After switching to a microservice-based architecture using containers and Amazon's Elastic Container Service, Kaplan saved significant costs. "We currently have more than 500 containers in production," Drabo said. "We have reduced the number of EC2 instances by 70%, resulting in 40% cost savings per application." Using the proposed dynamic scalable task scheduling (DSTS) model for automated container delivery, Kaplan can reduce deployment time, increase the frequency of updates, and improve developer satisfaction.

Conclusion

For dynamic scalable task scheduling (DSTS) in a container cloud context, we proposed a hybrid optimal and deep learning approach. The major contributions of this paper are the following:

  1. A modified multi-swarm coyote optimization (MMCO) method for scaling the virtual resources of containers to improve customer service level agreements.

  2. A modified pigeon-inspired optimization (MPIO) algorithm for task clustering and a fast adaptive feedback recurrent neural network (FARNN) for pre-virtual CPU allocation to ensure priority-based scheduling.

  3. A task load monitoring mechanism designed based on a deep convolutional neural network (DCNN), which achieves dynamic priority-based scheduling.

From the simulation outcomes, we conclude that the proposed DSTS model is very effective compared to the existing task scheduling models in terms of the quality-of-service metrics: resource imbalance degree (DId), resource residual degree (DRd), response time (RT), and throughput (TH). In future, we will extend our DSTS model by combining it with an optimization algorithm to optimize joint problems, i.e., resource allocation and task scheduling, in a container cloud environment.

References

  1. Wang B, Qi Z, Ma R, Guan H, Vasilakos AV (2015) A survey on data center networking for cloud computing. Comput Netw 91:528–547


  2. González-Martínez JA, Bote-Lorenzo ML, Gómez-Sánchez E, Cano-Parra R (2015) Cloud computing and education: a state-of-the-art survey. Comput Educ 80:132–151


  3. Khan AN, Kiah MM, Khan SU, Madani SA (2013) Towards secure mobile cloud computing: a survey. Futur Gener Comput Syst 29(5):1278–1299


  4. Xie XM, Zhao YX (2013) Analysis on the risk of personal cloud computing based on the cloud industry chain. J China Univ Posts Telecommun 20:105–112


  5. Han Y, Luo X (2013) Hierarchical scheduling mechanisms for multilingual information resources in cloud computing. AASRI Proc 5:268–273


  6. Bose R, Luo XR, Liu Y (2013) The roles of security and trust: comparing cloud computing and banking. Procedia Soc Behav Sci 73:30–34


  7. Elamir AM, Jailani N, Bakar MA (2013) Framework and architecture for programming education environment as a cloud computing service. Proc Technol 11:1299–1308


  8. Tsertou A, Amditis A, Latsa E, Kanellopoulos I, Kotras M (2016) Dynamic and synchromodal container consolidation: the cloud computing enabler. Transp Res Proc 14:2805–2813


  9. Kong W, Lei Y, Ma J (2016) Virtual machine resource scheduling algorithm for cloud computing based on auction mechanism. Optik 127(12):5099–5104


  10. Moschakis IA, Karatza HD (2015) A meta-heuristic optimization approach to the scheduling of bag-of-tasks applications on heterogeneous clouds with multi-level arrivals and critical jobs. Simul Model Pract Theory 57:1–25


  11. Singh S, Chana I (2015) QRSF: QoS-aware resource scheduling framework in cloud computing. J Supercomput 71(1):241–292


  12. Lin J, Zha L, Xu Z (2013) Consolidated cluster systems for data centers in the cloud age: a survey and analysis. Front Comput Sci 7(1):1–19


  13. Kertész A, Dombi JD, Benyi A (2016) A pliant-based virtual machine scheduling solution to improve the energy efficiency of iaas clouds. J Grid Comput 14(1):41–53


  14. Musa IK, Walker SD, Owen AM, Harrison AP (2014) Self-service infrastructure container for data intensive application. J Cloud Comput 3(1):1–21


  15. Choe R, Cho H, Park T, Ryu KR (2012) Queue-based local scheduling and global coordination for real-time operation control in a container terminal. J Intell Manuf 23(6):2179–2192


  16. Nam H, Lee T (2013) A scheduling problem for a novel container transport system: a case of mobile harbor operation schedule. Flex Serv Manuf J 25(4):576–608


  17. Bian Z, Li N, Li XJ, Jin ZH (2014) Operations scheduling for rail mounted gantry cranes in a container terminal yard. J Shanghai Jiaotong Univ Sci 19(3):337–345


  18. Zhang R, Yun WY, Kopfer H (2010) Heuristic-based truck scheduling for inland container transportation. OR Spectr 32(3):787–808


  19. Briskorn D, Fliedner M (2012) Packing chained items in aligned bins with applications to container transshipment and project scheduling. Mathem Methods Oper Res 75(3):305–326


  20. Briskorn D, Angeloudis P (2016) Scheduling co-operating stacking cranes with predetermined container sequences. Discret Appl Math 201:70–85


  21. Zhao D, Mohamed M, Ludwig H (2018) Locality-aware scheduling for containers in cloud computing. IEEE Trans Cloud Comput 8(2):635–646


  22. Liu B, Li P, Lin W, Shu N, Li Y, Chang V (2018) A new container scheduling algorithm based on multi-objective optimization. Soft Comput 22(23):7741–7752


  23. Lin M, Xi J, Bai W, Wu J (2019) Ant colony algorithm for multi-objective optimization of container-based microservice scheduling in cloud. IEEE Access 7:83088–83100


  24. Adhikari M, Srirama SN (2019) Multi-objective accelerated particle swarm optimization with a container-based scheduling for Internet-of-Things in cloud environment. J Netw Comput Appl 137:35–61


  25. Ranjan R, Thakur IS, Aujla GS, Kumar N, Zomaya AY (2020) Energy-efficient workflow scheduling using container-based virtualization in software-defined data centers. IEEE Trans Industr Inform 16(12):7646–7657


  26. Chen Q, Oh J, Kim S, Kim Y (2020) Design of an adaptive GPU sharing and scheduling scheme in container-based cluster. Clust Comput 23(3):2179–2191


  27. Hu Y, Zhou H, de Laat C, Zhao Z (2020) Concurrent container scheduling on heterogeneous clusters with multi-resource constraints. Futur Gener Comput Syst 102:562–573


  28. Rajasekar P, Palanichamy Y (2021) Scheduling multiple scientific workflows using containers on IaaS cloud. J Ambient Intell Humaniz Comput 12:7621–7636

  29. Menouer T (2021) KCSS: Kubernetes container scheduling strategy. J Supercomput 77(5):4267–4293


  30. Li C, Zhang Y, Luo Y (2021) Neighborhood search-based job scheduling for IoT big data real-time processing in distributed edge-cloud computing environment. J Supercomput 77:1853–1878


  31. Ahmad I, AlFailakawi MG, AlMutawa A, Alsalman L (2021) Container scheduling techniques: a survey and assessment. Journal of King Saud University-Computer and Information Sciences 34(2022):3934-3947

  32. Rausch T, Rashed A, Dustdar S (2021) Optimized container scheduling for data-intensive serverless edge computing. Futur Gener Comput Syst 114:259–271


  33. Zhu L, Huang K, Hu Y, Tai X (2021) A self-adapting task scheduling algorithm for container cloud using learning automata. IEEE Access 9:81236–81252


  34. Armbrust M, Fox A, Griffith R, Joseph AD, Katz R, Konwinski A, Lee G, Patterson D, Rabkin A, Stoica I et al (2010) A view of cloud computing. Commun ACM 53(4):50–58


  35. Gawali MB, Shinde SK (2018) Task scheduling and resource allocation in cloud computing using a heuristic approach. J Cloud Comp 7:4


  36. Gawali MB, Gawali SS (2021) Optimized skill knowledge transfer model using hybrid Chicken Swarm plus Deer Hunting Optimization for human to robot interaction. Knowl-Based Syst 220:106945


  37. Sahoo S, Sahoo B, Turuk AK (2018) An energy-efficient scheduling framework for cloud using learning automata. In: 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). IEEE, Bangalore, India. pp 1–5


  38. Li H, Wang X, Gao S, Tong N (2020) A service performance aware scheduling approach in containerized cloud. In: 2020 IEEE 3rd International Conference on Computer and Communication Engineering Technology (CCET). IEEE, Beijing, China. pp 194–198


  39. Burns B, Grant B, Oppenheimer D, Brewer E, Wilkes J (2016) Borg, omega, and kubernetes. Commun ACM 59(5):50–57



Funding

The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Authors and Affiliations

Authors

Contributions

Mr. Saravanan Muniswamy has made substantial contributions to design and in drafting the manuscript. Mr. Vignesh Radhakrishnan has made HIS contributions in acquisition of data and interpretation of data. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Saravanan Muniswamy.

Ethics declarations

Competing interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Muniswamy, S., Vignesh, R. DSTS: A hybrid optimal and deep learning for dynamic scalable task scheduling on container cloud environment. J Cloud Comp 11, 33 (2022). https://doi.org/10.1186/s13677-022-00304-7


Keywords

  • Cloud container
  • Task scheduling
  • Virtual resources
  • Task clustering
  • Priority based scheduling
  • Load monitoring