
Journal of Cloud Computing: Advances, Systems and Applications

Table 5 Description of the benchmarking metrics used for the evaluation of Severity-based techniques

From: Severity: a QoS-aware approach to cloud application elasticity

Benchmarking metric

Description

Availability

The percentage of time during which the workload pattern for a particular metric did not demand more than the processing capacity offered by the infrastructure. We do not distinguish whether the processing capacity was exceeded by a small or a large margin; in both cases the service is considered unavailable for the purposes of our experiments. Moreover, we assume that unavailability of the service at a particular instant does not impair its ability to handle the workload correctly as soon as it receives the required resources.
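As an illustration, availability under this definition could be computed from a per-timestep trace as below; this is a minimal sketch, and the `demand`/`capacity` values are hypothetical, not taken from the paper's experiments:

```python
# Hypothetical per-timestep trace: workload demand vs. offered capacity
# (e.g. requests/s). Any shortfall, however small, counts as unavailability.
demand = [40, 55, 80, 95, 60]
capacity = [50, 60, 75, 100, 75]

available_steps = sum(1 for d, c in zip(demand, capacity) if d <= c)
availability = 100.0 * available_steps / len(demand)
print(availability)  # 80.0 — capacity met demand in 4 of 5 timesteps
```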

Overprovisioning

The number of extraneous VM instances used (compared to the optimal), multiplied by the percentage of simulation time for which they were spawned.
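A minimal sketch of one way this product could be accumulated over a simulation, assuming the optimal VM count per timestep is known; the traces are hypothetical:

```python
# Hypothetical per-timestep VM counts: the optimal number (assumed known)
# and the number the platform actually ran.
optimal = [2, 2, 3, 3]
actual = [3, 4, 3, 3]

steps = len(optimal)
overprovisioning = 0.0
for a, o in zip(actual, optimal):
    extra = max(a - o, 0)            # extraneous VMs at this timestep
    overprovisioning += extra * (100.0 / steps)  # weighted by % of time
print(overprovisioning)  # 75.0 — 1 extra VM for 25% + 2 extra VMs for 25%
```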

Rigidness

The percentage of simulation time during which the application operated outside the rule thresholds that were set, either above the upper threshold or below the lower one. For example, if a rule on a metric states that a scale-out action should happen when the value of the metric surpasses 70%, while a scale-in action should happen when the value drops below 30%, the system is considered to exhibit 'rigidness' whenever the value of the metric is greater than 70% or lower than 30%.
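Using the 70%/30% thresholds from the example above, rigidness could be computed from a trace of metric readings as follows; the readings themselves are hypothetical:

```python
# Hypothetical metric readings (%) sampled over the simulation.
values = [25, 45, 72, 68, 31, 80]
upper, lower = 70, 30  # scale-out / scale-in thresholds from the rule

# Count timesteps spent outside the [lower, upper] band.
outside = sum(1 for v in values if v > upper or v < lower)
rigidness = 100.0 * outside / len(values)
print(rigidness)  # 50.0 — 3 of 6 readings fall outside the thresholds
```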

Number of scaling adaptations

The total number of scaling adaptations (each corresponding to the addition or removal of a number of VMs) performed by the platform. The first deployment is also counted in this number.
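A minimal sketch of counting adaptations from a VM-count timeline, assuming each change in the count is one adaptation and the initial deployment counts as well; the timeline is hypothetical:

```python
# Hypothetical VM-count timeline over the simulation.
vm_counts = [2, 2, 3, 3, 2, 2]

adaptations = 1  # the first deployment is counted
adaptations += sum(1 for prev, cur in zip(vm_counts, vm_counts[1:])
                   if cur != prev)
print(adaptations)  # 3 — initial deployment, one scale-out, one scale-in
```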