Advances, Systems and Applications
From: AI-empowered game architecture and application for resource provision and scheduling in multi-clouds
References | Solved problem | Process | Advantage |
---|---|---|---|
[23] | Actions are of low importance in some states | The value of the state and the advantage of each action are analyzed separately | Narrowed the range of Q-values |
[24] | Overestimation of Q-values | Decomposed the max operation into action selection and action evaluation | More stable training results |
[25] | Sample selection in experience replay | Improved the experience-buffer sampling policy | Improved the performance of DDQN |
[4] | Optimization of execution time and cost | Designed a pheromone update rule | Better global search ability |
[6] | Optimized resource utilization, processing cost, and transmission time | Performed task scheduling in two phases | Reduced task makespan |
[9] | Load balancing | Each firefly moves toward a firefly that appears brighter than itself | Reduced the transmission cost of workflows |
[13] | Meeting the QoS requirements of users | Learned from experience without prior knowledge | Improved user satisfaction |
[14] | Delay-sensitive task scheduling | Designed a reward function to reduce the average timeout period of tasks | Improved the scheduling efficiency of server-side tasks |
[15] | Model-free policy for continuous actions | Combined DPG and DQN | Supports a continuous action space |
[16] | Scalable scheduling of parallel tasks | Used a fully connected layer and an output layer | Improved task-scheduling performance |
[20] | High energy consumption | Used task priorities to identify the critical resources of the task graph | Reduced energy consumption in the data center |
[26] | Optimization of multiple objectives | Trained two DRL-based scheduling agents | Reduced the average job duration |
[27] | Low efficiency of resource management | Proposed a blacklist mechanism | Converged quickly |
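The separate analysis of state value and action advantage in [23] corresponds to the dueling aggregation step, where subtracting the mean advantage keeps the value/advantage decomposition identifiable and narrows the range of the resulting Q-values. A minimal sketch, assuming a NumPy setting; the function name and the example numbers are illustrative, not from the cited work:

```python
import numpy as np

def dueling_q(state_value, advantages):
    # Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
    # Subtracting the mean advantage makes V and A identifiable
    # and keeps the aggregated Q-values in a narrower range.
    advantages = np.asarray(advantages, dtype=float)
    return state_value + (advantages - advantages.mean())

# Example: V(s) = 2.0, advantages [1.0, -1.0] -> Q = [3.0, 1.0]
q = dueling_q(2.0, [1.0, -1.0])
```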
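The "decomposing the max operation" entry for [24] refers to computing the bootstrap target with two networks: one selects the next action, the other evaluates it, which counters the upward bias of a single max. A hedged sketch of the two target computations, with illustrative reward, discount, and Q-vectors not taken from the paper:

```python
import numpy as np

def dqn_target(reward, gamma, q_target_next):
    # Single-network target: the same Q-vector both selects and
    # evaluates the next action, which biases the target upward.
    return reward + gamma * np.max(q_target_next)

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    # Decomposed max: the online network selects the action,
    # the target network evaluates it.
    best_action = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[best_action]
```

With reward 1.0, gamma 0.9, online Q-values [1.0, 2.0] and target Q-values [3.0, 0.5] for the next state, the single-network target is 3.7 while the decomposed target is 1.45, illustrating the reduced overestimation.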