From: VTGAN: hybrid generative adversarial networks for cloud workload prediction (Journal of Cloud Computing: Advances, Systems and Applications)
| Category | Authors | Method | Dataset | Weakness |
|---|---|---|---|---|
| Time-series | Calheiros et al. [17] | ARIMA | Wikimedia Foundation real traces [5] | Time-series models are not suitable for highly volatile workloads, and no single model is superior on all tested datasets. These models cannot fit long-term time-series data. |
| | Vazquez et al. [81] | AR, MA, SES, DES, ETS, and ARIMA | Google [3]; Intel Netbatch logs | |
| | Kim et al. [46] | AR, ARMA, ARIMA, EMA, DES, WMA, and Gaussian-DES | Synthetic workloads: Growing, On/Off, Bursty, and Random | |
| | Hu et al. [38] | MA, AR, ARIMA, DM, and MM | 30 min. trace from an esc.tl.small instance | |
| | Fu and Zhou [28] | ARIMA | PlanetLab [4] | |
| | Aldossary et al. [7] | ARIMA | Collected from an OpenNebula testbed | |
| | Gai et al. [29] | WMA, CMA, and MA | - | |
| | Zhu and Agrawal [88] | ARMAX | - | |
| Machine learning | Farahnakian et al. [25] | LR | Random workload; PlanetLab | ML models did not achieve high prediction accuracy on highly dispersed data. These models cannot fit non-linear, complex data such as cloud workloads. |
| | Farahnakian et al. [26] | KNN | | |
| | Patel et al. [63] | SVR | Idle, Web, and Stress workloads | |
| | Cortez et al. [21] | Gradient boosting tree; Random Forest | Azure workload | |
| | Nguyen et al. [34] | MLR | PlanetLab | |
| | Moghaddam et al. [55] | LR, MLP, SVR, AdaBoost, Random Forest, Gradient Boosting, and Decision Tree | PlanetLab | |
| Deep learning | Zhang et al. [87] | RNN | | DL models did not achieve acceptable prediction accuracy because of the very long-term dependencies, complexity, and non-linearity of cloud data. |
| | Duggan et al. [24] | RNN | PlanetLab | |
| | Huang et al. [36] | RNN-LSTM | Real requests data | |
| | Yang et al. [84] | Echo state network (ESN) | | |
| | Song et al. [76] | LSTM | | |
| | Chen et al. [19] | Auto-Encoder GRU | Alibaba traces [1] | |
| | Peng et al. [64] | GRU-based encoder-decoder network | Dinda [2] | |
| | Zhu et al. [89] | Attention-based LSTM | Alibaba traces; Dinda | |
| | Mozo et al. [56] | CNN | ONTS dataset | |
| Hybrid | Liu et al. [52] | ARIMA-LSTM | | Although hybrid models are accurate on non-linear data with very long-term dependencies, they are more complex. |
| | Shuvo et al. [73] | LSTM-GRU (LSRU) | Bitbrains [10] | |
| | Bi et al. [13] | BG-LSTM | | |
| | Ouhame et al. [59] | CNN-LSTM | Bitbrains | |
| | Yazdanian and Sharifian [85] | GAN (LSTM-CNN) | Calgary; NASA; Saskatchewan | |
| | BHyPreC [44] | Bi-LSTM | Bitbrains | |
| | VTGAN | GAN (Bi-GRU-CNN); GAN (Bi-LSTM-CNN) | PlanetLab | |
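To make the table's four model families concrete, the sketches below give minimal, runnable illustrations. The time-series rows centre on the ARIMA family; this is a hedged sketch using statsmodels on a hypothetical synthetic CPU-utilisation trace, where the order (2, 1, 2), the 5-minute sampling, and the trace itself are illustrative assumptions rather than settings from the cited studies.

```python
# Minimal ARIMA baseline sketch (assumed synthetic trace, not a cited dataset).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(288)  # one day at 5-minute intervals, as in PlanetLab traces
cpu = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)

train, test = cpu[:-12], cpu[-12:]           # hold out the last hour
model = ARIMA(train, order=(2, 1, 2)).fit()  # (p, d, q) chosen arbitrarily
forecast = model.forecast(steps=12)
print(f"ARIMA(2,1,2) 12-step MAE: {np.mean(np.abs(forecast - test)):.2f}")
```

The weakness column shows up directly here: a single (p, d, q) fitted once cannot track bursty regime shifts, which is why the table reports poor fits on highly volatile workloads.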
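The machine-learning rows (LR, KNN, SVR, and tree ensembles) share a sliding-window formulation: the previous w observations become the feature vector for the next value. A scikit-learn SVR sketch follows; the window size w = 12, the RBF kernel, and C = 10 are assumptions for illustration, not values from the cited papers.

```python
# Sliding-window regression sketch with SVR (hyperparameters are assumed).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
cpu = 50 + 20 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 5, 500)

w = 12
X = np.lib.stride_tricks.sliding_window_view(cpu[:-1], w)  # rows: cpu[i:i+w]
y = cpu[w:]                                                # target: next value

X_train, X_test, y_train, y_test = X[:-50], X[-50:], y[:-50], y[-50:]
scaler = StandardScaler().fit(X_train)
svr = SVR(kernel="rbf", C=10.0).fit(scaler.transform(X_train), y_train)
pred = svr.predict(scaler.transform(X_test))
print(f"SVR test MAE: {np.mean(np.abs(pred - y_test)):.2f}")
```

A kernel regressor like this interpolates smoothly between seen windows, which illustrates the table's point about low accuracy on highly dispersed, non-linear cloud traces.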
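The deep-learning rows replace hand-built window features with recurrent layers that learn temporal structure directly. Below is a minimal Keras sketch of a Bi-LSTM one-step predictor in the spirit of the LSTM/GRU/Bi-LSTM entries; the layer width, epoch count, and normalisation are illustrative assumptions.

```python
# Bi-LSTM one-step forecaster sketch (sizes and epochs are assumed).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
cpu = 50 + 20 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 5, 500)
cpu = (cpu - cpu.mean()) / cpu.std()          # normalise the trace

w = 12
X = np.lib.stride_tricks.sliding_window_view(cpu[:-1], w)[..., None]
y = cpu[w:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(w, 1)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("one-step forecast:", model.predict(X[-1:], verbose=0).item())
```

The Bidirectional wrapper reads each window in both directions, the same idea the BHyPreC and VTGAN rows apply to LSTM and GRU cells.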
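Finally, the hybrid GAN rows pair a recurrent generator with a convolutional discriminator: the generator proposes the next value of the trace, and the discriminator, a 1-D CNN, judges whether a window plus its continuation looks real. The sketch below shows that adversarial forecasting pattern with a Bi-GRU generator; all layer sizes, the added MSE term, and the full-batch loop are illustrative assumptions, not the published VTGAN configuration.

```python
# Adversarial forecasting sketch: Bi-GRU generator vs. 1-D CNN discriminator.
# Everything below (widths, losses, loop) is assumed for illustration.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(3)
cpu = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500)

w = 12
X = np.lib.stride_tricks.sliding_window_view(cpu[:-1], w)[..., None].astype("float32")
y = cpu[w:].astype("float32")[:, None]

gen = tf.keras.Sequential([            # history window -> next value
    tf.keras.Input(shape=(w, 1)),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),
    tf.keras.layers.Dense(1),
])
disc = tf.keras.Sequential([           # window + candidate value -> logit
    tf.keras.Input(shape=(w + 1, 1)),
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1),
])
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt, d_opt = tf.keras.optimizers.Adam(1e-3), tf.keras.optimizers.Adam(1e-3)

for epoch in range(3):                 # full-batch training for brevity
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        fake = gen(X)                                        # (N, 1) prediction
        real_seq = tf.concat([X, y[:, :, None]], axis=1)     # window + truth
        fake_seq = tf.concat([X, fake[:, :, None]], axis=1)  # window + forecast
        d_real, d_fake = disc(real_seq), disc(fake_seq)
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # Adversarial term plus MSE keeps the forecast close to the trace.
        g_loss = bce(tf.ones_like(d_fake), d_fake) + tf.reduce_mean((fake - y) ** 2)
    g_opt.apply_gradients(zip(gt.gradient(g_loss, gen.trainable_variables),
                              gen.trainable_variables))
    d_opt.apply_gradients(zip(dt.gradient(d_loss, disc.trainable_variables),
                              disc.trainable_variables))
    print(f"epoch {epoch}: g_loss={float(g_loss):.3f} d_loss={float(d_loss):.3f}")
```

Even in this toy loop, the cost noted in the table's weakness column is visible: two networks, two optimizers, and a balance to strike between the adversarial and regression terms, which is what makes the hybrid family "more complex" than the single-model rows.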