- Open Access
Cloud computing and its interest in saving energy: the use case of a private cloud
© Basmadjian et al.; licensee Springer. 2012
- Received: 20 January 2012
- Accepted: 18 May 2012
- Published: 8 June 2012
Cloud computing data centres, because they house powerful ICT equipment, are high energy consumers and therefore account for large quantities of emissions. Energy-saving strategies applicable to such data centres are thus a very promising research direction from both the economic and environmental standpoints.
In this paper, we study the case of private cloud computing environments from the perspective of energy saving incentives. However, the proposed approach can also be applied to any computing style: cloud (both public and private), traditional and supercomputing. To this end, we provide a generic conceptual description for ICT resources of a data centre and identify their corresponding energy-related attributes. Furthermore, we give power consumption prediction models for servers, storage devices and network equipment. We show that by applying appropriate energy optimisation policies guided through accurate power consumption prediction models, it is possible to save about 20% of energy consumption when typical single-site private cloud data centres are considered.
Minimising a data centre’s energy consumption, on the one hand, acknowledges the potential of ICT for saving energy across many segments of the economy; on the other hand, it helps the ICT sector show the way for the rest of the economy by reducing its own carbon footprint. In this paper, we show that it is possible to save energy by studying the case of a single-site private cloud data centre. We believe that through the federation of several cloud data centres (both private and public), it is possible to minimise both energy consumption and CO2 emissions.
- Private cloud computing data centre
- IT resources
- Power and energy consumption
Energy consumption of data centres is becoming a key concern for their owners: energy costs (fuel) continue to rise, and the CO2 emissions related to this consumption are significant. In 2007, Gartner estimated that the global impact of the ICT sector (considering PCs, servers, cooling, fixed and mobile telephony, local area networks, office telecommunications and printers) amounts to 2% of global CO2 emissions, approximately the same as the fuel consumption of the airline industry.
Therefore, saving money in the energy budget of a data centre without sacrificing Service Level Agreements (SLAs) is an excellent incentive for data centre owners, and would at the same time be a great success for environmental sustainability. This aspect is worth highlighting, since it is rare in environment-related problems to find a solution that satisfies all stakeholders.
One of the latest trends in IT is cloud computing, where the public and private deployment models are commonly used to differentiate one cloud provider from another. The former is made available to the general public on a pay-per-use basis, whereas the latter is operated solely for the internal users of an organisation.
Since elasticity is one of the key aspects of a cloud service, the cloud provider gives the impression of having an infinite set of resources. In fact, large public cloud providers have end users potentially spread all over the globe and can capitalise on statistical compensation across a large number of service requests: since cloud resources are frequently allocated and released (a side effect of the pay-per-use billing mode), over time the many allocations and de-allocations tend to compensate each other, keeping fluctuations under reasonable control. Private clouds, instead, generally have a much smaller number of users (e.g. the employees of a corporation), and the provider needs to size its ICT infrastructure for peak usage. Since the variety of usage patterns is limited inside a closed community, the utilisation rate of the ICT resources is quite likely to vary considerably between night and day and/or weekdays and weekends. Therefore, private cloud providers need to keep a larger capacity buffer than public providers and suffer more from load variations. In addition, some public cloud providers have started to offer discounted prices for certain time slots in which they foresee a resource usage gap – something similar to the concept of “last minute” tickets for travellers – whereas private providers do not have this option.
Given the above-mentioned differences in the ICT resources’ utilisation patterns between private and public cloud providers, the advantages of saving energy are clearly more evident for the former: in case of low load, public cloud owners, instead of saving energy and costs, can decide to attract additional business by lowering prices to fill the utilisation gap. A private cloud provider, in case of low load, can instead optimise the whole infrastructure to run the load with the lowest energy consumption while preserving the SLAs with respect to its users.
In summary, there are many incentives for any cloud provider – public or private – to save energy. The opportunities for public cloud providers might depend on alternative business conduct, while private cloud providers will likely obtain clear benefits.
Contributions and results
In general, energy savings can be achieved in data centres through optimisation mechanisms whose main objective is to minimise energy consumption. However, for these mechanisms to take the most suitable energy-saving decisions, accurate power consumption prediction models are essential. Consequently, one of the major contributions of this paper is a detailed description of power consumption prediction models for the ICT resources of data centres, presented in Section Power consumption prediction models. Note that the details of the optimisation algorithms used for our use-case study are out of the scope of this paper and interested readers can refer to.
The architecture of our energy-saving mechanism is based on a three-step control loop: Optimisation, Reconfiguration and Monitoring. The whole state of the data centre is continuously monitored. Another major contribution of this paper is a detailed description of the state of a data centre in terms of ICT resources, with their relevant energy-related attributes and interconnections, introduced in Section Data centre schema. This state is periodically examined by the Optimisation module in order to find alternative deployment configurations of software applications and services that allow saving energy. Once a suitable energy-saving configuration is detected, the loop is closed by issuing a set of actions on the data centre to reconfigure the deployment to this energy-saving setup.
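The three-step control loop described above can be sketched as follows. This is an illustrative outline only, not the paper’s implementation; the server attributes, the idle threshold and the power-off policy are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ServerState:
    name: str
    cpu_usage: float       # utilisation in [0, 1]
    powered_on: bool = True

def monitor(servers):
    """Monitoring step: return a snapshot of the data centre state."""
    return [(s.name, s.cpu_usage, s.powered_on) for s in servers]

def optimise(servers, idle_threshold=0.05):
    """Optimisation step: propose actions, e.g. powering off near-idle
    servers. The threshold is a hypothetical policy parameter."""
    return [("power_off", s.name) for s in servers
            if s.powered_on and s.cpu_usage < idle_threshold]

def reconfigure(servers, actions):
    """Reconfiguration step: apply the proposed actions, closing the loop."""
    by_name = {s.name: s for s in servers}
    for verb, name in actions:
        if verb == "power_off":
            by_name[name].powered_on = False

servers = [ServerState("srv1", 0.60), ServerState("srv2", 0.01)]
snapshot = monitor(servers)
actions = optimise(servers)
reconfigure(servers, actions)
```

In a real deployment the loop would run periodically and the optimisation step would consult the power consumption prediction models before deciding.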
Finally, we demonstrate the results obtained through single-site optimisation policies in a private cloud computing data centre. These results, presented in Section Evaluation and results, illustrate the possibility of saving about 20% of the total consumption of the ICT equipment inside a private cloud computing data centre. It is worthwhile to note that our power consumption prediction models, as well as the state description of a data centre, are generic enough to suit energy-saving optimisation mechanisms applicable to any computing style, be it traditional, super or cloud computing.
In the existing literature, the power consumption of a server has been modelled in two different ways: offline and online. In the former case, SimplePower, SoftwareWatt and Mambo estimate the power consumption of an entire server. These models use analytical methods based on low-level information such as the number of used CPU cycles; their major advantage is high accuracy. Nevertheless, the offline nature of such models requires extensive simulation, which results in a significant amount of time to estimate the power consumption. Consequently, these models are infeasible for predicting the power consumption of highly dynamic environments like cloud computing data centres.
To overcome this problem, an online (run-time) methodology was proposed[9, 10]. Such models are based on information monitored through performance counters. These counters keep track of activities performed by applications, such as the number of accesses (e.g. to caches) and switching activities within processors. The total power dissipation of a server is computed as the sum of the power consumption of each activity. However, the counters in certain processors (e.g. AMD Opteron) can report only four out of 84 events. Therefore, such models are unable to predict the power consumption accurately in real-life cases.
Another run-time methodology is to use high-level information, as proposed by. These authors assumed that processors are the main contributors to the total server power consumption, and thus proposed a linear model based on the processor’s utilisation. However, such a model suffers from significant inaccuracy, as the server’s power consumption is not exactly linear. The key reason is that the impact of other components (e.g. multiple cache levels, RAM, I/O activities) and their interactions are not considered. To prevent this problem, designed a component-level model. In this approach, a calibration phase is performed before predicting the power consumption of a server. During this phase, the model analyses the system parameters (e.g. CPU utilisation, hard disk I/O rate) influencing its power consumption. However, implementing such a model within a data centre (having homogeneous and/or heterogeneous resources) is very difficult, since it needs calibration whenever new hardware is installed in the existing servers. Since the component level is flexible for modelling a generic server, we have also adopted the same approach as. In contrast to this model, which provides one linear model for the whole server, our approach designs different models for different components based on their behaviours. Another distinguishing property is that our approach does not need a calibration phase.
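The utilisation-based linear model criticised above can be stated in a few lines. The idle and maximum power figures used in the usage example are hypothetical; the point is that the model interpolates linearly between them, which is precisely why it misses the non-linear contribution of caches, RAM and I/O.

```python
def linear_server_power(p_idle, p_max, cpu_util):
    """Linear utilisation-based model: P = P_idle + (P_max - P_idle) * u,
    with u in [0, 1]. Accurate only to the extent that the server's
    power curve is actually linear in CPU utilisation."""
    if not 0.0 <= cpu_util <= 1.0:
        raise ValueError("cpu_util must be in [0, 1]")
    return p_idle + (p_max - p_idle) * cpu_util

# Hypothetical server: 100 W idle, 250 W fully loaded.
p_half = linear_server_power(100.0, 250.0, 0.5)
```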
It is worthwhile to note that the above-mentioned approaches, which depend on low-level information in order to predict the power consumption, are not appropriate in real-life cases, simply because the underlying monitoring systems of data centres are not able to provide the low-level information that these approaches require. Consequently, in this paper we identified the most relevant energy-related attributes of ICT resources, for which the monitoring systems of data centres typically provide the necessary information.
Storage devices range from a single hard disk to SAN (Storage Area Network) devices, which consume a significant amount of power. Several studies[15–18] were dedicated to devising models for individual hard disks, where the power consumption of a disk is predicted based on its states, such as seek, rotation, reads, writes and idle. With the emergence of disk arrays (e.g. RAID), the above-mentioned approaches need to be adapted to deal with several hard disks instead of one. Several models[19–21] for RAID have been proposed. proposed STAMP (Storage Modeling for Power), which maps the front-end workload to back-end disk activities and computes the power consumption of the back-end activities. A major concern is that this model ignores the power of some activities, such as spin-up and spin-down. generalised the model of individual hard disks to RAID, where the overall power of the RAID is the sum of the power consumed by each individual hard disk within the array. However, this model does not take into account the power of the RAID controller’s processor and memory. addressed this problem by proposing a model (MIND), which computes the power of RAID by considering the controller’s power and disk activities such as idle, sleep, random, sequential and cache accesses.
The key drawback of these RAID models is their applicability in real-world settings such as data centres, due to their low-level input parameters. For instance, a hard disk’s state (e.g. idle, accessing, startup) and operation mode (read/write operations) are usually not available at the data centre level, where a monitoring system provides information only about average read and write rates over a time period (e.g. per second). Furthermore, the total power consumption of SAN devices, which are used for storage within data centres, cannot be computed through RAID models, since a SAN also has processing (e.g. CPU, RAM) and network components (I/O ports) in addition to hard disks. In this paper, we propose a model which can be applied to any type of storage device. In contrast to the existing models, our approach requires information such as read/write rates, which is usually available within data centres’ monitoring systems.
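A rate-based model of the kind advocated here could take the following shape. This is a sketch under assumptions, not the paper’s exact formula: it assumes dynamic power scales with the observed read/write rates relative to the device maxima, and all coefficients are hypothetical values that would come from data sheets.

```python
def storage_power(p_idle, read_rate, write_rate,
                  max_read_rate, max_write_rate,
                  p_read_max, p_write_max):
    """Hedged sketch of a rate-based storage power model.

    read_rate/write_rate are the monitored rates (MB/s), typically the
    only storage metrics a data centre monitoring system exposes;
    max_* and p_*_max come from the manufacturer's data sheet."""
    p_read = p_read_max * (read_rate / max_read_rate)
    p_write = p_write_max * (write_rate / max_write_rate)
    return p_idle + p_read + p_write

# Hypothetical disk: 5 W idle, up to 4 W reading and 6 W writing.
p = storage_power(5.0, 50.0, 25.0, 100.0, 100.0, 4.0, 6.0)
```

The appeal of this form is exactly the one argued in the text: it needs only read/write rates rather than per-disk states such as seek or spin-up.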
There has been a great deal of attention devoted to power consumption models for routers and switch fabrics. The power consumption of the integrated Alpha 21364 router and the IBM InfiniBand 8-port 12X switch is modelled in, illustrating that buffers are the largest power consumers in routers. A crossbar switch, if present, consumes less but still significant power, and arbiter power is negligible under high network load. It is also shown that the router micro-architecture has a huge impact on router power consumption. suggest a framework to estimate the power consumption of switch fabrics in network routers. They state that the power consumption of switch fabrics is mainly determined by the internal node switches, the internal buffer queues and the interconnect wires. present a power and area model for network-on-chip architectures. They propose ORION 2.0, which uses a set of architectural power models for network routers to perform early-stage network-on-chip power estimation.
The ICT resources (e.g. servers, storage devices and network equipment), with their relevant energy-related attributes and interconnections, are represented by the data centre schema. The data centre operator identifies all the equipment the site is composed of and is responsible for creating and editing an instance of this schema. Note that some form of automated discovery of resources and export to the schema instance might be provided; however, the data centre operators remain responsible for validating their configuration.
Given the complexity and heterogeneity of the data centre infrastructure, we derive the data centre schema by decomposing the modelling process into 5 phases: ICT resources, network topology, server, storage, and services modelling.
ICT resources modelling
Inside each data centre, ICT equipment can be organised either inside racks (“Rack” class) or in independent cases, such as single box stands, generally with a tower form factor (“Tower Server”), in addition to box-like network devices such as routers and switches (“Box Network” class). The “Framework Capabilities” class describes the energy-related controlling capabilities of the management and automation tools available for managing the data centre energy-wise. The term controlling capabilities refers to all possible actions (e.g. powering equipment off/on, migrating software load, etc.) applied to the data centre’s ICT resources and carried out through the framework. In the rest of this paper, the frameworkID attribute is used by every class that needs to identify its corresponding controlling capabilities.
A rack is a framework typically used to hold several different ICT resources: rack-mountable servers are represented by the “Rackable Server” class, and enclosures that host blade form factor servers (“Blade Server” class) are depicted by the “Enclosure” class. Typically, “Tower Server”s and “Rackable Server”s have independent Power Supply Units (“PSU” class) and cooling systems (“Cooling System” class), whereas a “Blade Server” shares the PSU and cooling system of its enclosure. Note that all three classes representing servers are specialisations of a single parent “Server” class (see Section Server modelling). Furthermore, racks typically contain a set of Power Distribution Units (“PDU” class): in most cases they are passive devices simply used to connect the different power plugs of the rack elements; in other cases they can be active and perform power measurements and switch on/off functions. Storage Area Network devices (“SAN” class) are generally mounted inside racks and have independent Power Supply Units and cooling systems. A SAN is a dedicated device that provides network access to consolidated, block-level storage. Finally, network devices such as routers and switches can also be mounted inside racks (“Rackable Network” class); their specifics are explained in Section Network topology modelling. Note that in the rest of this paper the optional attributes measuredPower and computedPower, present in most of the classes, are used to record the power consumption of the corresponding resource, obtained respectively through a dedicated power meter and through the power consumption prediction models of Section Power consumption prediction models. These two parameters serve model refinement, as comparing measured and computed values allows checking and refining the power consumption prediction models. Finally, powerIdle and powerMax denote, respectively, the idle and fully active power consumption of the corresponding class.
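A fragment of the schema described above can be rendered as plain data classes. The class and attribute names (PSU, powerIdle, powerMax, measuredPower, computedPower, frameworkID) mirror the text; the Python types, defaults and sample values are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PSU:
    efficiency: float  # percentage, taken from the data sheet

@dataclass
class RackableServer:
    frameworkID: str               # links to "Framework Capabilities"
    psus: List[PSU]
    powerIdle: float               # idle power consumption (W)
    powerMax: float                # fully active power consumption (W)
    measuredPower: Optional[float] = None  # from a dedicated power meter
    computedPower: Optional[float] = None  # from the prediction models

@dataclass
class Rack:
    servers: List[RackableServer] = field(default_factory=list)

# Hypothetical single-server rack instance.
rack = Rack([RackableServer("fw1", [PSU(91.0)], 120.0, 310.0)])
```

Keeping measuredPower and computedPower side by side on each resource is what makes the model-refinement comparison mentioned in the text straightforward.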
Network topology modelling
As illustrated, the “Flow” class describes an end-to-end communication occurring within the data centre, or between one network node within the data centre and another node on an external network (for example, the Internet). This class’s attributes include the communication end points, which can be expressed by the source and destination “Network Node” addresses. The “Flow” class also includes a bandwidth attribute, the bandwidth required by a communication, which provides an indication of the expected flow throughput for traffic engineering purposes.
A “Network Node” is an abstract class representing entities such as routers, switches, servers, and so on. Each “Network Node” can have a number of communication ports (“Network Port” class); however, a port can only be associated with a single “Network Node”. Moreover, each network port is associated with a “Link” class that connects this port to another port in the network. The “Network Node” class has the following attributes: processingBandwidth refers to the maximum number of packets that can be processed by the node per second; it can usually be found in specification data sheets or can be measured. forwardFlag indicates whether the node is able to forward packets and is used to differentiate end hosts from routers and switches. Note that each “Network Node” is equipped with Power Supply Units (“PSU” class) and a cooling system (“Cooling System” class), such as fans, whose energy-related attributes are presented in Section Server modelling.
The “Network Port” class defines a port on the “Network Node”, which can be any of the variants depicted in Figure4: e.g. Serial PPP, VPN, ATM, e80211x, Ethernet, Optical FDDI and Tunnel. The most relevant energy-related attributes of the “Network Port” class are the following: lineCapacity denotes the nominal transmission rate of the port (typical values are 10 Mbps for Ethernet, 100 Mbps for Fast Ethernet, and 1 Gbps for Gigabit Ethernet), whereas portID provides a unique identifier for the port. powerMax, powerIdle and lineCapacity are used to capture the power consumption behaviour of the port. trafficIn and trafficOut represent, respectively, the packet throughput into and out of the port; they are required to compute the actual power consumption based on the power consumption model. bufferSize and bufferOccupancy together describe the buffer characteristics and management policies in use within the port, which are needed to estimate QoS metrics; they are used to compute the delay for the traffic forwarded to the corresponding port. Finally, the “Link” class models the propagation medium associated with its corresponding “Network Port”. Its attributes are propagationDelay, which defines the time required to physically move a bit between two end points, and bitErrorRate.
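One plausible way to combine powerIdle, powerMax, lineCapacity and the monitored trafficIn/trafficOut is sketched below. The proportional-to-utilisation assumption is ours, made for illustration; the paper’s actual port model is presented in Section Power consumption prediction models.

```python
def port_power(power_idle, power_max, traffic_in, traffic_out, line_capacity):
    """Sketch of a port power estimate (an assumption, not the paper's
    exact model): dynamic power taken as proportional to the port's
    utilisation, i.e. observed throughput relative to full-duplex
    line capacity. All rates are in the same unit (e.g. Mbps)."""
    utilisation = min(1.0, (traffic_in + traffic_out) / (2 * line_capacity))
    return power_idle + (power_max - power_idle) * utilisation

# Hypothetical Fast Ethernet port: 1 W idle, 3 W at full load.
p_half = port_power(1.0, 3.0, 50.0, 50.0, 100.0)
```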
Typically, a server consists of a “Mainboard” and runs several software applications, such as a “Native Operating System” or “Native Hypervisor” (see Section Services modelling). The mainboard is the central printed circuit board in a server computing system; it holds many of the crucial components of the system while providing connectors for other peripherals. Its memoryUsage attribute (the opposite of free space) denotes the overall usage (in GB) of all the attached memory modules, whose value should be kept up-to-date through the data centre’s monitoring system. The following are the main components attached to the “Mainboard”: Central Processing Units (“CPU” class), Random Access Memories (“RAMStick”), Network Interface Cards (“NICs”), hardware RAIDs (“HardwareRAID”) and Storage Units (“StorageUnit”).
With the advent of modern processors, a “CPU” consists of more than one “Core”, where each core can have its own “Cache” depending on the level (e.g. Level 1). Furthermore, it is also possible that certain cores of a processor share the same cache (e.g. Level 3). The most relevant energy-related attributes of “CPU” are: architecture indicates the processor’s manufacturer (e.g. Intel, AMD, etc.), each having a different power consumption behaviour; cpuUsage denotes the utilisation (load) of the processor, whose value should be kept up-to-date through the data centre’s monitoring system. DVFS (Dynamic Voltage and Frequency Scaling) is an attribute used to indicate whether the corresponding server’s energy-saving mechanisms (e.g. Intel SpeedStep, AMD Cool’n’Quiet, etc.) are enabled. lithography and transistorNumber denote, respectively, the feature size in nanometres and the number of transistors (in the order of millions) of the processor, which are used for idle power consumption prediction.
Each “Core” operates at a frequency (in GHz), scaled dynamically between a minimum and a maximum (frequencyMin and frequencyMax), and a voltage. coreLoad represents the utilisation of the corresponding core, whose value should be kept up-to-date through the data centre’s monitoring system. totalPstate and lastPstate indicate, respectively, the total number of P-states (e.g. 2 or more) and the most recent P-state of the core (e.g. P0, P1, etc.). Note that these two parameters are used to estimate the frequency of a processor’s core in data centres whose monitoring system cannot provide up-to-date dynamic values for this attribute. The implementation details of estimating the frequency are not covered in this paper.
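Since the paper leaves the frequency-estimation method unspecified, one hypothetical approach is a linear interpolation over the P-state index: P0 maps to frequencyMax and the deepest P-state to frequencyMin. This is purely an illustrative assumption, as real P-state-to-frequency tables are vendor-specific and not evenly spaced.

```python
def estimate_core_frequency(frequency_min, frequency_max,
                            last_pstate, total_pstates):
    """Hypothetical estimate of a core's frequency (GHz) from its most
    recent P-state, for monitoring systems that cannot report the
    frequency directly. Assumes evenly spaced P-states, with P0 at
    frequencyMax and the deepest state at frequencyMin."""
    if total_pstates < 2:
        return frequency_max
    step = (frequency_max - frequency_min) / (total_pstates - 1)
    return frequency_max - last_pstate * step

# Hypothetical core with 3 P-states between 1.0 and 3.0 GHz.
f_mid = estimate_core_frequency(1.0, 3.0, 1, 3)
```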
The “RAMStick” class has several attributes relevant to power consumption estimation: voltage reflects the supply voltage under which the memory module operates, which depends highly on the type (e.g. DDR1, DDR2, DDR3, etc.); size and frequency indicate, respectively, the size (in GB) and frequency (in MHz) of the memory; vendor denotes the manufacturer (e.g. KINGSTON, HYNIX, etc.); and bufferType shows the type of the memory module in terms of buffer technology (e.g. fully buffered, buffered, registered, or unbuffered). It is worthwhile to mention that the values of all the above-mentioned attributes are provided by the manufacturer’s data sheet.
Several “Storage Unit”s can be attached to a “Server”, either directly through its “Mainboard” or by means of a dedicated “Hardware RAID” device. Additional information regarding storage modelling is provided in Section Storage modelling. As mentioned previously, “Tower Server”s and “Rackable Server”s are equipped with their own Power Supply Units (“PSU” class) and cooling systems (“Cooling System”), which can be either a “Water Cooler” or a “Fan”. The most relevant energy-related attribute of a “PSU” is efficiency (in percent), which indicates the loss of the power supplied to the components of the server and is highly dependent on the load. Note that efficiency values for the corresponding loads can be extracted from the manufacturer’s data sheet.
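Because PSU efficiency is tabulated at a few load points in the data sheet, intermediate loads have to be interpolated. The sketch below assumes simple linear interpolation over hypothetical (load, efficiency) pairs; the actual curve shape and sample points vary per PSU model.

```python
def psu_input_power(output_power, efficiency_curve):
    """Estimate the power drawn from the wall by a PSU delivering
    output_power (W), interpolating linearly over data-sheet
    (load_watts, efficiency_percent) pairs. The pairs used below are
    hypothetical examples, not real data-sheet values."""
    points = sorted(efficiency_curve)
    if output_power <= points[0][0]:          # clamp below the table
        eff = points[0][1]
    elif output_power >= points[-1][0]:       # clamp above the table
        eff = points[-1][1]
    else:
        for (l0, e0), (l1, e1) in zip(points, points[1:]):
            if l0 <= output_power <= l1:
                eff = e0 + (e1 - e0) * (output_power - l0) / (l1 - l0)
                break
    return output_power / (eff / 100.0)

# Hypothetical data-sheet points: efficiency peaks around mid load.
curve = [(100, 85.0), (200, 90.0), (400, 88.0)]
p_in = psu_input_power(200.0, curve)
```

The difference between p_in and output_power is the PSU loss referred to in the text.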
Finally, inside the “Fan” class, depth denotes the depth (in metres) of a fan, whereas maxRPM and powerMax indicate, respectively, the maximum rotations per minute and the maximum power consumption of the fan. All the above-mentioned attributes can be found in the manufacturer’s manuals. actualRPM shows the current rotation speed of the fan, whose value should be kept up-to-date through the data centre’s monitoring system.
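One common way to relate powerMax, maxRPM and the monitored actualRPM is the fan affinity law, under which power grows roughly with the cube of rotation speed. This is our assumption for illustration, not a model stated in the text.

```python
def fan_power(power_max, actual_rpm, max_rpm):
    """Fan affinity-law sketch (an assumption): power scales as the cube
    of rotation speed, so powerMax at maxRPM is scaled down by
    (actualRPM / maxRPM) ** 3."""
    return power_max * (actual_rpm / max_rpm) ** 3

# Hypothetical fan: 12 W at 2000 RPM, currently spinning at 1000 RPM.
p = fan_power(12.0, 1000.0, 2000.0)
```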
The left part of the UML class diagram in Figure6 illustrates the server storage modelling, where the “Storage Unit” class represents the abstraction for all kinds of disk-like devices providing physical storage for data. “Storage Unit”s can be connected directly to a “Server” through its “Mainboard”, or by means of a “Hardware RAID” controller that provides the different levels of RAID support to servers. We consider both traditional disks with revolving platters (“Hard Disk” class) and solid state disks (“Solid State Disk”) as possible “Storage Unit” devices. Note that the attributes of the “Server” and “Mainboard” classes are described in Section Server modelling.
The “Storage Unit” class’s energy-related attributes are the following: maxReadRate and maxWriteRate indicate, respectively, the maximum read and write rates of the disk, expressed as transferred size per second (MB/s); their values can be extracted from the manufacturer’s data sheet. readRate and writeRate indicate, respectively, the actual read and write rates of the disk, expressed in MB/s as mentioned previously; the values of both attributes should be kept up-to-date through the data centre’s monitoring system.
Each “Hard Disk” has the following energy-related attributes: rpm indicates the rotations per minute of the disk, platters denotes the number of platters, and AAM indicates whether the hard disk is equipped with the Automatic Acoustic Adjustment feature. For the “Solid State Disk”, powerByRead and powerByWrite denote, respectively, the power consumed by read and write operations. The distinction is due to the fact that read and write operations in solid state disks have different power consumption behaviours.
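The powerByRead/powerByWrite split for solid state disks could be combined with the monitored rates as follows. The proportional weighting by rate relative to the device maximum is our assumption for illustration.

```python
def ssd_dynamic_power(power_by_read, power_by_write,
                      read_rate, write_rate,
                      max_read_rate, max_write_rate):
    """Hedged sketch: each operation type contributes its data-sheet
    power (powerByRead / powerByWrite) in proportion to how close its
    monitored rate (MB/s) is to the device maximum."""
    return (power_by_read * read_rate / max_read_rate
            + power_by_write * write_rate / max_write_rate)

# Hypothetical SSD: 2 W at full-rate reads, 4 W at full-rate writes.
p = ssd_dynamic_power(2.0, 4.0, 250.0, 125.0, 500.0, 500.0)
```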
“Storage Unit”s can also be attached logically to a “Server”. Such functionality is provided by means of the “Logical Unit” abstraction of “SAN” devices, called “LUN”, whose details are covered in Section SAN storage. Each “LUN” class has the following attributes: LUNRef is used to reference the corresponding logical unit of a SAN device, whereas readRate and writeRate have the same definition as their counterparts in the “Storage Unit” class.
A Storage Area Network (SAN) is a dedicated device that provides network access to consolidated, block-level storage. SAN architectures are an alternative to storing data on disks attached to servers, or on Network Attached Storage (NAS) devices connected through general-purpose networks, which use file-based protocols.
The right part of Figure6 illustrates the UML class diagram of SAN devices. Typically, a “SAN” device consists of more than one (usually two) “PSU” and “Fan” for redundancy purposes, several Fibre Channel (FC) and Ethernet Network Interface Cards (“FiberchannelNIC” and “EthernetNIC” classes) and a set of “Storage Unit”s. Furthermore, “Storage Unit”s are logically consolidated through the “Logical Unit” abstraction, such that a “Storage Unit” is a member of one and only one “Logical Unit”. “Server”s access a “Logical Unit” through a unique logical unit number reference (“LUN” class).
Each “SAN” class has the following energy-related attributes: networkTrafficIn and networkTrafficOut have the same definition as their counterparts trafficIn and trafficOut of Section Network topology modelling. The RAIDLevel attribute of the “Logical Unit” class shows the level (e.g. RAID 0, 1, 5, 10, etc.) of the RAID being used with the corresponding logical unit; in fact, each logical unit can be considered a separate RAID controller. Furthermore, stripeSize shows the size of the RAID protocol’s stripe. numberOfRead and numberOfWrite denote, respectively, the number of read and write operations performed per second. All the other attributes of the “Logical Unit” class have the same definition as their counterparts in the “Storage Unit” class of Section Server storage.
Physical servers (“Server” class) execute software structured and layered as depicted in Figure7. Above the hardware level, at start-up a physical server bootstraps either a traditional operating system (“Native Operating System”) or a virtualisation hypervisor (“Native Hypervisor”). Some virtualisation hypervisors need to run on top of an operating system (“Hosted Hypervisor”).
Both “Native Hypervisor” and “Hosted Hypervisor” can run “Virtual Machine”s, which implement a software equivalent of a hardware server environment (with some limitations). The actual power consumption of a “Virtual Machine” increases with the number of processing resources being used, such as numberOfCPUs, actualCPUUsage (load imposed on the processor(s)), actualStorageUsage (size in GB), actualDiskIORate (MB/s), actualMemoryUsage (size in GB) and actualNetworkUsage (packets/s or MB/s). Note that the values of these attributes should be kept up-to-date through the data centre’s monitoring system. resourceClassID points to the required resources of the virtual machine in terms of CPU, memory, disk I/O, etc.; this information is useful for the energy optimisation algorithms to ensure that the SLA on resource utilisation is not violated when moving a VM from one host to another. frameworkID is used to discover the appropriate framework actions (e.g. migrate, pause, resume) applicable to the virtual machines. In a cloud computing environment, each virtual machine typically belongs to a specific type identified by a unique name (e.g. m1.small, m1.medium, c1.large, etc.); each type specifies the required resources of the virtual machine in terms of CPU, memory and disk. cloudVmType identifies the appropriate predefined virtual machine type, whereas cloudVmImage indicates the installed operating system.
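The SLA check performed before migrating a VM can be illustrated with a small lookup keyed on cloudVmType. The type names mirror the examples in the text; the resource figures per type and the capacity check itself are hypothetical.

```python
# Hypothetical resource requirements per predefined VM type.
VM_TYPES = {
    "m1.small":  {"cpus": 1, "memory_gb": 2, "disk_gb": 20},
    "m1.medium": {"cpus": 2, "memory_gb": 4, "disk_gb": 40},
    "c1.large":  {"cpus": 4, "memory_gb": 8, "disk_gb": 80},
}

def can_host(free_cpus, free_memory_gb, free_disk_gb, cloud_vm_type):
    """Return True if a migration target has enough free capacity for
    the VM type, so that the resource-utilisation SLA is not violated."""
    req = VM_TYPES[cloud_vm_type]
    return (free_cpus >= req["cpus"]
            and free_memory_gb >= req["memory_gb"]
            and free_disk_gb >= req["disk_gb"])

ok = can_host(2, 4, 100, "m1.medium")
```

An optimisation algorithm would run such a check for every candidate host before issuing a migrate action through the framework.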
“Virtual machine”s typically boot a “Hosted Operating System”, which might contain – depending on the case – specialised drivers to operate on virtualised devices.
The UML class diagram of Figure7 describes the “Operating System” class as a generalisation of traditional native or hosted operating systems (OS) and of native hypervisors: they all share the “boot on” relation with respect to a physical server or a virtual machine. systemRAMBaseUsage indicates the amount of memory allocated by the operating system. In addition, an operating system in general contains multiple “File System” types, might support inside the kernel a software implementation of RAID features (“Software RAID”), and may include software implementations of network devices, for example virtual switches (“Software Network”). The power consumption of a “File System” increases with the fragmentation factor and with the decrease of the free available space with respect to the total capacity (size), as the disk needs more power to locate the position of the data. An important energy-related attribute of “Software Network” is switchFabricType, as it indicates the type of network device the software is emulating (switch or router).
Typical software packages (“Software Application” class) can run on a “Native Operating System” and/or on a “Hosted Operating System”: some applications run in any execution environment, while others are unable to run in virtualised mode, hence the distinction in the model. The actual power consumption of a “Software Application” increases with the number of processing resources it uses, such as numberOfCPUs, actualCPUUsage (load imposed on the processor), actualStorageUsage (size in GB), actualDiskIORate (MB/s), actualMemoryUsage (size in GB) and actualNetworkUsage (packets/s or MB/s). Note that the values of these attributes should be kept up-to-date through the data centre’s monitoring system.
In this section, we introduce the power consumption prediction models of the most relevant ICT resources of data centres such as servers, storage devices and networking equipment (e.g. routers or switches). Note that such power consumption models are the cornerstone of energy optimisation algorithms by providing them with detailed insights regarding the power consumption of the aforementioned ICT resources in different workload deployment configurations.
The power consumption of a server is broken down into two parts: idle and dynamic. The former is consumed while the server is idle with no activity, whereas the latter arises when the server performs computations. It is therefore necessary to model both aspects for the different components of a server illustrated in Figure 5, in order to gain a deeper understanding of the power consumption.
It has been shown that processors are the most prominent contributors (about 40%) to the overall power consumption of servers. The power consumption of a processor is due either to its idle state (no utilisation) or to its dynamic state while carrying out computations. With the advent of multi-core processors (e.g. dual-core, quad-core) and their corresponding energy-efficient mechanisms (e.g. Intel SpeedStep, AMD Cool’n’Quiet), several techniques (e.g. Dynamic Voltage and Frequency Scaling - DVFS) were introduced that save energy, especially when the processor is idle or at low utilisation.
Idle power consumption
where α = 0.114312 (V·Ω)⁻¹, β = 0.22835 Ω⁻¹ and γ = 0.139204 are coefficients, with V and Ω denoting respectively voltage and resistance units. It is worthwhile to note that these values were derived from power meter measurements of a quad-core processor with energy-efficient mechanisms (e.g. DVFS) deactivated.
Values of δ i for Intel Processors
Intel Xeon dual-core E5502
Intel Xeon quad-core E5540
Intel Xeon hexa-core X5650
where is introduced in Equation (4).
Dynamic power consumption
where the left-hand side denotes the dynamic power consumption of core i at utilisation (load) L i , whereas P max indicates the maximum power consumption at 100% utilisation. It is worthwhile to note that Equation (6) is derived from the well-known linear utilisation-based model for single-core processors.
where V max and f max denote respectively the voltage and frequency at maximum utilisation, whereas C eff  indicates the effective capacitance which includes the capacitance C and switching activity factor α0→1.
where is introduced in Equation (6).
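The linear utilisation-based model underlying Equation (6) can be sketched as follows. This is a minimal illustration, assuming the dynamic share grows linearly from zero at idle to P max − P idle at full load, and that the reduction factor δ′ acts as a simple multiplier over the per-core sum; the function names and the multiplier form are assumptions, not the paper’s exact equations:

```python
def dynamic_core_power(load, p_idle, p_max):
    """Dynamic power of one core under the linear utilisation model.

    The dynamic share is assumed to grow linearly from 0 at idle to
    p_max - p_idle at 100% utilisation (load = 1.0).
    """
    if not 0.0 <= load <= 1.0:
        raise ValueError("load must be in [0, 1]")
    return (p_max - p_idle) * load


def processor_dynamic_power(core_loads, p_idle, p_max, delta=1.0):
    """Sum of per-core dynamic power, scaled by a reduction factor.

    `delta` plays the role of the delta' discussed in the text, accounting
    for frequency scaling and resource sharing across cores (assumed here
    to act as a plain multiplier on the per-core sum).
    """
    return delta * sum(dynamic_core_power(l, p_idle, p_max) for l in core_loads)
```

For instance, two cores at 50% load on a processor with a 10 W idle and 100 W maximum rating, with δ′ = 0.9, would be estimated at 0.9 × (45 + 45) = 81 W of dynamic power.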
In fact, certain factors play a major role in reducing the overall power consumption of multi-core processors in Equation (8). The two main techniques are the following:
- Energy saving mechanisms such as Intel SpeedStep and AMD Cool’n’Quiet decrease the power consumption of a core by controlling its clock speed and voltage dynamically. When a core is idle or its utilisation (load) is low, the clock speed is reduced to minimise power dissipation.
- Resource sharing (e.g. the L2 cache in certain Intel multi-core processors) reduces power consumption. We believe this holds because sharing the L2 cache with other cores lowers the cache miss ratio. Consequently, less communication takes place with the memory (e.g. to fetch new instructions), which reduces the total power consumption of the cores.
The reduction factor δ ′ changes by modifying the frequency of the processor.
The number of active cores of the processor (those with a utilisation of more than 1%) has an impact on the reduction of power consumption.
Values of δ ′ based on Frequency and Number of 100% Loaded Cores
Intel Xeon quad-core E5540
Total power consumption
A Random Access Memory (RAM) module consumes power when idle, refreshing the ranks that hold stored data, as well as when memory ranks are accessed to perform read or write operations. In this paper, we focus on Synchronous Dynamic RAM DDR3 technology, since the servers of most modern data centres (including our real-world testbed) are equipped with this type of memory module.
where s i , f i and V i denote respectively the size (in GB), frequency (in MHz) and the voltage (volts) of a specific DDR3 memory module i, whereas c takes a value of 0.00013.
- 1.If the processor is in idle state, performing no activity, then we assume that the memory modules are also in idle state (γ = 0).
- 2.If the processor is carrying out computations (utilisation of more than 1%), then we adopt a probabilistic approach in modelling γ, such that the more total memory is in use, the higher the probability that a memory access is performed: (19)
where n and s i are defined in Equation (16) and memoryUsage is introduced in Section Server modelling.
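The RAM model above can be sketched in code. Since the exact combination of s i , f i and V i in Equations (16)-(19) is not reproduced here, this sketch assumes the per-module idle term is the product c·s i ·f i ·V i and that the access probability γ (Equation (19)) scales the idle draw; both assumptions are the author of this sketch’s, not the paper’s:

```python
C = 0.00013  # coefficient c from the DDR3 model in the text

def ram_power(modules, memory_usage_gb, cpu_active=True):
    """Hypothetical DDR3 power sketch.

    `modules` is a list of (size_gb, freq_mhz, voltage) tuples.  The idle
    term per module is assumed to be C * size * frequency * voltage, and
    the access probability gamma = memoryUsage / total installed memory
    (Equation (19)) is assumed to scale the idle draw.
    """
    total_size = sum(s for s, _, _ in modules)
    idle = sum(C * s * f * v for s, f, v in modules)
    # Idle processor implies idle memory (gamma = 0), as in case 1 above.
    gamma = (memory_usage_gb / total_size) if cpu_active else 0.0
    return idle * (1.0 + gamma)
```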
Typically, the power consumption of a hard disk can be broken down into three major parts: startup, idle, and accessing modes, where each such mode has different power consumption behaviour. The disk is in startup mode when all of its mechanical and electrical components are activated. On the other hand, the disk is in idle mode when no activity (read or write) is carried out, whereas it is in accessing mode while performing read or write operations.
such that x,y,z∈[0,1] denote respectively the probability that the disk is in accessing, idle and startup modes, whereas P idle is the idle state power consumption provided by the manufacturer’s data sheet.
If the average operation size (MB/s) of reads and writes per second is zero (readRate = writeRate = 0), then we assume that the disk is in its idle mode (x = z = 0 and y = 1).
If the average operation size (MB/s) of reads and writes per second is not zero, then we adopt a probabilistic approach in modelling the mode changes such that:
If readRate > 0 and writeRate > 0, then,
If writeRate = 0, then,
If readRate = 0, then,
If 0 < y ≤ 0.3, then we set α = 0.9 and .
If 0.3 < y ≤ 0.6, then we set α = 0.5 and .
If 0.6 < y ≤ 1, then we set α = 0.1 and .
We can see from the above equations that the more the hard disk is in idle mode (y ≃ 1), the higher the probability that it will remain in the standby and sleep states.
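The three-mode weighting described above can be sketched as a simple weighted sum; a minimal illustration assuming the disk power is the probability-weighted combination of the accessing, idle and startup mode powers (the function name and parameter order are this sketch’s, not the paper’s):

```python
def disk_power(x, y, z, p_access, p_idle, p_startup):
    """Weighted three-mode hard disk power (Equation (21)-style sketch).

    x, y, z are the probabilities that the disk is in accessing, idle and
    startup mode respectively; they must sum to 1.  p_idle is taken from
    the manufacturer's data sheet, as the text notes.
    """
    if abs(x + y + z - 1.0) > 1e-9:
        raise ValueError("mode probabilities must sum to 1")
    return x * p_access + y * p_idle + z * p_startup
```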
Network interface card
where ρ is the channel utilisation (also known as the normalised link load). Both the time periods and the power values depend on the particular network technology employed.
It is interesting to note that the choice of network technology can affect, to varying degrees, the utilisation of other computer system components, in particular the processor (CPU). For example, in serial point-to-point (PPP) communications, the CPU normally executes a significant number of communication-related operations (e.g. frame checking and protocol control). These operations can easily increase the dynamic power consumption of the CPU. On the other hand, embedded network implementations, such as InfiniBand, can move much of the communication work to the embedded architecture. To include this network-technology dependent behaviour in our model, consider the parameter L i (CPU load) in Equation (6) as the sum of two components, a non-network dependent load and a network dependent load γρ. The parameter γ (γ ≥ 0) models the impact of a given network technology on CPU load as a function of network utilisation ρ. Small γ values can account for the minimal impact that embedded network architectures cause to CPU load, whereas larger γ values can model the higher CPU dependency of other network interfaces.
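The load decomposition just described can be sketched as follows; a minimal illustration assuming the two components simply add and that a core’s load saturates at 100% (names are this sketch’s assumptions):

```python
def effective_cpu_load(base_load, gamma, channel_utilisation):
    """CPU load split into non-network and network-dependent parts.

    Models the decomposition from the text: the network technology
    contributes gamma * rho on top of the non-network load.  Small gamma
    suits embedded architectures (e.g. InfiniBand); large gamma suits
    CPU-heavy interfaces (e.g. PPP).
    """
    load = base_load + gamma * channel_utilisation
    return min(load, 1.0)  # a core cannot exceed 100% utilisation
```

The resulting value can then be fed as L i into the linear dynamic power model of Equation (6).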
where P CPU , P RAM , P NIC and P HDD are given respectively by Equations (10), (18), (23) and (21), whereas c is a constant related to the mainboard’s own power consumption. Note that it is technically challenging to compute the power consumption of the mainboard. Hence, statistical values for c can be derived based on the server type (e.g. tower, rackable, blade), which is reflected by means of the powerIdle and powerMax attributes of the “Mainboard” class in Section Server modelling.
where P max and RPM max denote respectively the maximum power and rotations per minute of the fan, whose values, together with the depth d, can be extracted from its manufacturer’s data sheet.
Power supply unit
- 1.If the data centre’s monitoring system provides information at the PSU level (measuredPower of “PSU” class in Section Data centre schema), then the power consumption is given by the following equation:
- 2.If the data centre’s monitoring system provides information only at the server level (measuredPower of “Server” class in Section Data centre schema), then we assume that this measured power of the server is evenly distributed among its n PSUs (having similar efficiency) providing power to the components, and compute the power consumption by the following equation:
- 3.If the data centre’s monitoring system provides information at neither the server level nor the PSU level, then we compute the power consumption by the following equation:
such that P Mainboard and P Fan are introduced in Equations (24) and (29) respectively, whereas n denotes the number of PSUs and e their efficiency (assumed identical for all installed PSUs).
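The three cases above can be sketched in one function. Cases 1 and 2 follow directly from the text; the formula for case 3 (the body of the corresponding equation is not reproduced here) is an assumption, treating the PSU draw as the conversion loss on the power delivered to mainboard and fans, split over n identical PSUs:

```python
def psu_power(n_psus, efficiency, measured_psu=None, measured_server=None,
              p_mainboard=0.0, p_fan=0.0):
    """Per-PSU power estimate following the three monitoring cases.

    1. PSU-level measurement available: use it directly.
    2. Server-level measurement only: split evenly across n identical PSUs.
    3. No measurement: assumed form -- conversion loss on the power
       delivered to mainboard and fans, at efficiency e, per PSU.
    """
    if measured_psu is not None:
        return measured_psu
    if measured_server is not None:
        return measured_server / n_psus
    return (p_mainboard + p_fan) * (1.0 - efficiency) / (n_psus * efficiency)
```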
Server power consumption
- 1.If the server is of type Blade, then its power consumption is given by the following equation:(31)
- 2.If the server is of type Tower or Rackable, then its power consumption is given by the following equation:(32)
such that P Mainboard , P Fan and P PSU are respectively given by Equations (24), (29), and Section Power supply unit.
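The server-type distinction can be sketched as follows. Consistent with the testbed description later in the paper (fans and PSUs bundled with the blade enclosure), this sketch assumes Equation (31) counts only the mainboard for a blade, while Equation (32) adds fan and PSU terms for tower/rackable servers; the exact equation bodies are assumptions:

```python
def server_power(server_type, p_mainboard, p_fan=0.0, p_psu=0.0):
    """Server power by chassis type, following Equations (31)/(32).

    Blade servers draw fan and PSU power at the enclosure level, so only
    the mainboard term is assumed to count per blade; tower and rackable
    servers include their own fans and PSUs.
    """
    if server_type == "Blade":
        return p_mainboard
    if server_type in ("Tower", "Rackable"):
        return p_mainboard + p_fan + p_psu
    raise ValueError(f"unknown server type: {server_type}")
```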
where n denotes the total number of installed hard disks, whose idle power consumption is given by Equation (20); m and l indicate respectively the total number of Ethernet Network Interface Cards (NICs) and Fibre Channel NICs, whose idle power values are given by the manufacturer’s data sheet; and c is a constant representing the idle power consumption of the mainboard and its attached components other than those mentioned above. Statistical values for c can be configured through the powerIdle attribute of the SAN devices introduced in Section SAN storage. It is worthwhile to note that in most real-life cases such SAN devices rarely go into sleep or standby modes. Hence, we set α = 1 and β = 0 in Equation (20).
such that Nb R i and Nb W i denote respectively the total number of read and write operations performed per second, represented by the numberOfRead and numberOfWrite attributes of the “Logical Unit” class in Section SAN storage, whereas P HDD is the power consumption of the corresponding hard disk introduced in Equation (21). Since data centre monitoring systems cannot provide accurate information on whether the last performed operation was a read or a write, we adopt a probabilistic approach in Equation (36), using the number of read and write operations performed per second to estimate which operation dominates. Such an estimate is important for RAID protocols since the number of involved hard disks differs between reads and writes.
where stripeSize denotes the size of the RAID protocol’s stripes, specified by its level. The same methodology can be used to compute the number of involved hard disks w i for write operations. It is important to note that the hard disks involved in a given operation are selected randomly, since this is purely dependent on the RAID level’s protocol. We therefore assume that the hard disks attached to a given Logical Unit i have similar power consumption behaviour.
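The stripe-based disk count can be sketched as follows; a minimal illustration assuming an operation touches one disk per stripe it spans, capped at the number of disks in the Logical Unit (the exact count is RAID-level dependent, as noted above):

```python
import math

def disks_involved(operation_size_mb, stripe_size_mb, total_disks):
    """Number of hard disks touched by one RAID operation (sketch).

    Assumes the operation spans ceil(size / stripeSize) stripes, each on a
    distinct disk, capped at the number of disks in the Logical Unit.
    """
    if operation_size_mb <= 0:
        return 0
    return min(math.ceil(operation_size_mb / stripe_size_mb), total_disks)
```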
where γ ′ represents the power consumed by the equipment without workload and Φ(.) a function that determines the level of power used by a given packet switching throughput. The exact form of function Φ(.) will be determined by the implementation technology. In practical terms, the contribution of Φ(.) to the total power consumption of a regular (embedded) network equipment is expected to be small. However, it is also expected that this trend will change in the future as “greener” implementations are introduced.
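The network equipment model P = γ′ + Φ(throughput) can be sketched with a linear Φ; the per-packet energy coefficient is a hypothetical placeholder for the implementation-dependent function, reflecting the text’s observation that the traffic-dependent share is small in today’s equipment:

```python
def network_equipment_power(idle_power_w, throughput_pps,
                            energy_per_packet_j=1e-6):
    """Network device power: static part plus traffic-dependent part.

    idle_power_w plays the role of gamma' (power without workload);
    Phi is assumed linear in packet throughput, with a hypothetical
    per-packet energy coefficient standing in for the implementation-
    dependent function.
    """
    return idle_power_w + energy_per_packet_j * throughput_pps
```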
Testbed environment and configuration
The testbedc under investigation provides a computational environment implementing cloud computing as an Infrastructure as a Service (IaaS) platform. This is the most basic cloud service typology, where virtual infrastructure resources (e.g. CPU, memory, storage devices) are provided to users on a dynamic and scalable basis. It is worthwhile to note that the testbed is based on a lab-grade infrastructure fully resembling (at a smaller scale) both the configuration and the functional capabilities of actual production-grade IaaS implementations, whether private or public.
Servers’ Hardware Configuration
Dual CPU, quad-core Intel Xeon E5540, 2.53 GHz 8 MB L3 cache
24 GB (6 x 4 GB DIMMs) DDR3
Two hot plug hard drives 2 x 300 GB
Dual-port 10 gigabit Ethernet adapter NC532m
Inside each enclosure, the servers are interconnected through an HP Virtual Connect Ethernet Module. The two racks, in turn, are interconnected through an external Ethernet switch. Power supply is bundled with the enclosure, through 6 high efficiency (90%) 1200W HP Common-Slot Power Supply Units. Cooling is provided by 6 HP Active Cool 100 fan units, also directly installed in the enclosures.
Finally, energy measurement is performed by an HP hardware component named iLO (Integrated Lights-Out), accessible through the Insight Control software suite. iLO offers the ability to read real-time electrical power consumption down to single server level.
The Cloud Controller (CC),
The Node Controller (NC),
The Power and Monitoring Collector (PMC),
The client (end user) system.
The Cloud Controller (CC) is the component hosting the core cloud management functions, i.e. it is the application server (Front End) where the cloud web services actually reside. These services are triggered by end user requests asking to activate or deactivate a set of computational resources, identifiable as virtual machines. The CC software is deployed on a physical server, as is typically done to keep the response time to client requests under control. This software runs on a Red Hat Enterprise Linux (RHEL) 5.5 operating system instance.
The five Node Controllers (NC) are the physical machines providing the virtualised environment to cloud platform clients. They are the physical servers on which virtual machines are created and instantiated by the CC, initialised with the software image selected by the client from the service catalogue, and finally made exclusively accessible to the requesting client through the network. The instantiated virtual machines can be de-instantiated upon a client request containing a “terminate instance” command, and their resources are then released by the CC. Both events (instantiation and de-instantiation of a virtual machine) are captured by the energy-aware plug-in, which in turn triggers optimisation algorithms to minimise the energy consumption of the testbed. The Node Controller software, like the Cloud Controller (CC), runs on a RHEL 5.5 operating system instance. The virtual machines instantiated in response to client requests are created and deployed by a XEN hypervisor, and typically host Linux images (e.g. Ubuntu, Suse, Red Hat).
The energy-aware plug-in (described briefly in Section Contributions and results) resides entirely on a dedicated virtual machine running on the VMware ESX 4.0 hypervisor. The Power and Monitoring Collector (PMC) is implemented by a customised version of collectd, an open source Linux daemon able to collect, transfer and store performance data of computers and network equipment. For our testbed, specific collectd agents were developed to interface with iLO and acquire power measurement data. Like the Cloud Controller, the PMC is deployed on a physical server.
Finally, the client systems are emulated by a custom software tool generating sequences of requests that faithfully replay the interaction between a group of observed users from a real-life context and the cloud IaaS infrastructure. The client load simulation is deployed inside virtual machines running an Ubuntu image, whose execution is scheduled and coordinated by a custom component running on the same VMware node where the energy-aware plug-in is deployed.
A cloud computing IaaS load is by definition fairly unpredictable, in the sense that its instantaneous computational load fluctuates arbitrarily between zero and the maximum available capacity of the physical resources. This unpredictability stems from accommodating requests coming from a group of users without being constrained by a static planning of the infrastructural capacity. In this regard, finding a suitable testing methodology is challenging due to the lack of upfront knowledge of the actual usage pattern of the environment.
To overcome this methodological problem, the activities of a real cloud computing IaaS environment were traced over a period of 6 months, and the system parameters of each active physical and virtual resource were monitored. From these observations, a collection of repeating test sequences and usage patterns was extracted that altogether provides an exhaustive representation of the system states against which to exercise our optimisation algorithms and measure their actual results. To this end, a custom workload simulator was designed and developed. This tool can generate a sequence of actions and direct them to the Cloud Controller (CC), creating the required workload snapshots in order to enact energy-aware optimisation algorithms and measure the achieved results with the best significance.
The testbed is equipped with a data logger component storing all the details of the energy-aware plug-in activities, along with the measured energy and the corresponding timestamps. After the end of the proof of concept, the log files were extracted from the system and carefully analysed to derive a perceptible track of the actual user activities performed and logged. As a final outcome of this analysis, we obtained crisp and content-relevant activity profiles of 7 different usage patterns. The chosen profiles span a sufficient timeframe and content to capture a significant variance of activity profiles and a sufficient amount of dynamic context changes (new activities and tasks, high load versus night timeframes). These profiles serve as the basis for designing and implementing the workload patterns enacted by the simulator tool, whose details are covered next.
Steady tasks: spawn a virtual machine, and keep it running for a medium-long period of time, with a basically constant level of resource usage (e.g. CPU, memory, storage device); typical cases were complex software application development tasks.
Spiky tasks: spawn a virtual machine, intensively use it for a short term period (e.g. a quick debug on an application), then suddenly release it to the IaaS environment.
Rippling tasks: spawn a virtual machine and keep it running for a medium-long period of time with a fairly variable pattern of resource usage; this typology can be associated, for instance, with data management/reporting activities, or with particular tasks run in collaboration.
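The three activity types above can be sketched as a hypothetical workload-pattern generator; the durations, load levels and sampling intervals below are illustrative placeholders, not the parameters actually used in the testbed’s simulator:

```python
import random

def make_task(kind, rng=None):
    """Hypothetical generator for the three observed task profiles.

    Returns (duration_minutes, per-interval CPU load series) for one VM:
    'steady' runs long at a constant load, 'spiky' is a short intensive
    burst, 'rippling' runs long with a variable load.  All numeric
    parameters are illustrative assumptions.
    """
    rng = rng or random.Random(42)
    if kind == "steady":
        duration = rng.randint(120, 480)              # medium-long run
        return duration, [0.5] * (duration // 10)     # constant usage
    if kind == "spiky":
        duration = rng.randint(5, 30)                 # short, intensive
        return duration, [0.95] * max(duration // 5, 1)
    if kind == "rippling":
        duration = rng.randint(120, 480)              # variable usage
        return duration, [rng.uniform(0.1, 0.9) for _ in range(duration // 10)]
    raise ValueError(f"unknown task kind: {kind}")
```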
After identifying the three basic activity types above, the next step was to configure the workload simulation tool to generate a realistic sequence of system actions (creating and de-instantiating virtual machines) able to replicate these recovered patterns as faithfully as possible.
Power consumption predictions
Before performing our energy optimisation tests, it was necessary to validate the accuracy of the power consumption prediction models of Section Power consumption prediction models. To this end, we carried out observations of both the idle and the dynamic power consumption of the blade servers whose hardware configuration is presented in Table 3.
Idle power consumption predictions
Idle Power Consumption Prediction Breakdown
Dynamic power consumption predictions
Set the server’s power management policy (e.g. performance, on-demand, etc.).
Measure power consumption from iLO with CPU in idle state.
While the CPU utilisation is less than 100%:
Increment by 20% the workload on the server.
Wait for a 10-minute period, to let the server and the power metering system reach a stable state.
Measure the power consumption.
The measurement was also repeated while simulating memory usage with the lookbusy tool, to assess the impact of memory usage on power consumption.
Energy optimisation tests
Cloud computing, whether private or public, is becoming increasingly important in the IT sector due to the numerous advantages (see Section Main focus) it gives to its end users. To cope with high user demand, data centres with a cloud computing style possess a myriad of ICT resources. In most cases this over-provisioning of resources, which serves to respect the Service Level Agreements that the data centres have with their end users, leads to very high energy consumption. Consequently, the power and energy consumption of data centres has recently become an issue for both economic and ecological reasons. In this paper, we studied the case of a private cloud computing data centre from the energy efficiency perspective, and showed that there are incentives to save energy. To this end, we described in the form of UML class diagrams the ICT resources with their most relevant energy-related attributes. Furthermore, we provided generic power consumption prediction models for servers, storage devices and network equipment. Note that our proposed methodology is generic enough to encompass any computing style: traditional, cloud and supercomputing. Finally, in order to validate the energy optimisation policies, we performed evaluations in a real-life private cloud computing data centre and showed that it is possible to save almost 20% of energy when only a single site is taken into account.
Minimising the energy consumption of data centres.
Minimising the CO2 emissions of data centres.
With respect to item (1), our current optimisation policies take into account minimising the overall energy consumption of the data centre while not violating any of its SLAs. Concerning item (2), new optimisation policies should be devised that take into account the availability of green energy, so that the overall CO2 emissions are minimised by taking advantage of the concept of data centre federation.
For the data centre businesses:
Reduction of costs and therefore prices.
Marketing options for green services.
Anticipation of potential energy legislation.
For the data centre end users: reduction of cost for services.
aBased on the required performance and energy saving needs.
cHewlett-Packard Italy Innovation Center located in Milan
eImplemented in the form of a virtual machine
Has been working at the University of Passau, Germany, as a Post Doctoral fellow since 2009. He holds an M.Sc. and a Ph.D. in Computer Science from the University of Toulouse. His research interests are replication in distributed systems, mathematical modelling of systems, and energy efficiency in large-scale systems. He participates in the EU funded projects ALL4Green, FIT4Green and EuroNF.
Hermann de Meer
Is currently appointed as Full Professor at the University of Passau, Germany, and as Honorary Professor at University College London, UK. He is director of the Institute of IT Security and Security Law (ISL) at the University of Passau. His main research interests include IT security and resilience, virtualisation and energy efficiency, complex and self-organising systems, peer-to-peer systems, quality of service and performance modelling, Internet protocols, home networking, and mobile computing. Hermann de Meer has led several nationally and internationally funded projects on Performance Modelling and Computer Networking. He currently holds several research grants funded by the Deutsche Forschungsgemeinschaft (DFG) and by the EU (FP6 and FP7).
Is a Research Fellow in the Intelligent Systems and Networks group at Imperial College London. He holds an M.Sc. and Ph.D. in Computer Science from the University of Central Florida, an M.Sc. in Telecommunications from Universidad Nacional de Ingenieria and a B.Sc. in Electronic Engineering from Universidad Ricardo Palma. His prior experience includes work as a Principal Network Engineer for industry and as a Visiting Assistant Professor at the University of Central Florida. His research is focused on self-aware networks in both the wired and wireless domains and other leading topics in computer networks and distributed systems. Dr. Lent has participated in many UK and EU funded research projects as a post-doctoral fellow and as a Co-PI, including UK EPSRC Self-Aware Networks and Quality of Service, EU FP6 CASCADAS, EU FP7 DIESIS, and EU FP7 FIT4Green.
Graduated in Electronics Engineering with honours in 1982 at Politecnico di Milano (Italy), he has over 25 years of working experience in R&D, IT consulting and system integration at major computer companies (Olivetti, Digital, Compaq, HP). He held various positions in software engineering until 1997, working on operating system development and manufacturing framework products, and was nominated Consulting Engineer by the Digital Corporate Engineering Board in 1992. In parallel to his job in industry, Giovanni Giuliani performed university teaching and research activities from 1988 to 1993 at engineering and computer science universities in Milan and Como. From 1998 - as Solution Architect in the Consulting and System Integration organisation of Digital and then, through the mergers, Compaq and finally HP - he delivered several complex projects in the manufacturing, government and finance markets, later exploring the area of mobility for various sectors. In 2005 he joined the HP Italy Innovation Center, where he has been involved in several research projects funded by the European Commission in the areas of mobility, collaboration, cloud computing and Green IT. Since 2009 he has been leading the HP Italy Cloud Computing Initiative, which has the goal of helping customers understand cloud computing, evaluate possible impacts and benefits, experiment hands-on with cloud computing technologies in a real lab, and start cloud computing proof of concept and pilot projects.
The research leading to these results was supported by the European Community’s 7 th Framework Programme in the context of the FIT4Green project.
- Newsroom G: Gartner Estimates ICT Accounts for 2 Percent of Global CO2 Emissions. 2007. Tech. rep., [http://www.gartner.com/it/page.jsp?id=503867]Google Scholar
- Hamilton J: Cooperative Expendable Micro-Slice Servers (CEMS): Low Cost, Low Power Servers for Internet-Scale Services. Tech. rep., Amazon Web Services. http://mvdirona.com/jrh/talksandpapers/jameshamilton_cems.pdf Google Scholar
- Vaquero LM, Rodero-Merino L, Caceres J, Lindner M: A break in the clouds: towards a cloud definition. SIGCOMM Comput Commun Rev 2009, 39: 50–55. 10.1145/1517480.1517491View ArticleGoogle Scholar
- Mell P, Grance T: The NIST Definition of Cloud Computing. 2009. Tech. rep., National Institute of Standards and Technology, Information Technology LaboratoryGoogle Scholar
- Quan DM, Basmadjian R, deMeer H, Lent R, Mahmoodi T, Sannelli D, Mezza F, Dupont C: Energy Efficient Resource Allocation Strategy for Cloud Data Centres. Proceedings of the 26th Int’l Symposium on Computer and Information Sciences (ISCIS 2011) 2011, 133–142.Google Scholar
- Vijaykrishnan N, Kandemir MT, Irwin MJ, Kim HS, Ye W: Energy-driven integrated hardware-software optimisations using SimplePower. Proceedings of the 27th annual international symposium on Computer architecture 2000, 95–106.Google Scholar
- Gurumurthi S, Sivasubramaniam A, Irwin MJ, Vijaykrishnan N, Kandemir M: Using complete machine simulation for software power estimation: the SoftWatt approach. Proceedings of Eighth International Symposium on High-Performance Computer Architecture 2002, 141–150.View ArticleGoogle Scholar
- Shafi H, Bohrer PJ, Phelan J, Rusu CA, Peterson JL: Design and validation of a performance and power simulator for PowerPC systems. IBM J Res Dev 2003, 47: 641–651.View ArticleGoogle Scholar
- Lewis A, Ghosh S, Tzeng NF: Run-time energy consumption estimation based on workload in server systems. Proceedings of the 2008 conference on Power aware computing and systems 2008.Google Scholar
- Economou D, Rivoire S, Kozyrakis C: Full-system power analysis and modeling for server environments. Workshop on Modeling Benchmarking and Simulation (MOBS) 2006.Google Scholar
- Berrendorf R, Ziegler H: The Performance Counter Library: A Common Interface to Access Hardware Performance Counters on Microprocessors. 1998. Tech rep, FZJ-ZAM-IB-9816:1–58, http://www2.fz-juelich.de/jsc/docs/printable/ib/ib-98/ib-9816.pdfGoogle Scholar
- West P: Core Monitors: Monitoring. 2008. Master’s thesis, The Florida State University Google Scholar
- Fan X, Weber WD, Barroso LA: Power provisioning for a warehouse-sized computer. Proceedings of the 34th annual international symposium on computer architecture 2007.Google Scholar
- Saravana M, Govidan S, Lefurgy C, Dholakia A: Using on-line power modeling for server power capping. 2009.Google Scholar
- Zedlewski J, Sobti S, Garg N, Zheng F, Krishnamurthy A, Wang RY: Modeling Hard-Disk Power Consumption. Proceedings of the 2nd USENIX Conference on File and Storage Technologies 2003.Google Scholar
- Molaro D, Payer H, Moal DL: Tempo: Disk Drive Power Consumption Characterization and Modeling. IEEE 13th International Symposium on Consumer Electronics 2009, 246–250.Google Scholar
- Hylick A, Sohan R, Rice A, Jones B: An Analysis of Hard Drive Energy Consumption. IEEE International Symposium on Modeling, Analysis and Simulation of Computers and Telecommunication Systems 2008, 1–10.View ArticleGoogle Scholar
- Greenawalt P: Modeling power management for hard disks. Proceedings of 2nd International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems 1994, 62–66.Google Scholar
- Allalouf M, Arbitman Y, Factor M, Kat R, Meth K, Naor D: Storage modeling for power estimation. Proceedings of SYSTOR 2009: The Israeli Experimental Systems Conference 2009.View ArticleGoogle Scholar
- Sivathanu S, Ungureanu C, Liu L: Modeling the Performance and Energy of Storage Arrays. International Green Computing Conference 2010, 229–242.View ArticleGoogle Scholar
- Liu Z, Zhou J, Yu W, Wu F, Qin X, Xie C: MIND: A black-box energy consumption model for disk arrays. International Green Computing Conference and Workshops 2011, 0: 1–6.Google Scholar
- Wang HS, Peh LS, Malik S: A Power Model for Routers: Modeling Alpha 21364 and InfiniBand Routers. IEEE Micro 2003, 23: 26–35.
- Ye TT, Benini L, Micheli GD: Analysis of Power Consumption on Switch Fabrics in Network Routers. Proceedings of the 39th Design Automation Conference (DAC) 2002, 524–529.
- Kahng AB, Li B, Peh LS, Samadi K: ORION 2.0: A Power-Area Simulator for Interconnection Networks. IEEE Transactions on Very Large Scale Integration (TVLSI) 2011. http://hdl.handle.net/1721.1/67492
- Hass J, Froedge J: Usage and Public Reporting Guidelines for The Green Grid’s Infrastructure Metrics PUE/DCiE. Tech. rep., The Green Grid, 2009.
- Meade RL, Diffenderfer R: Foundations of Electronics: Circuits & Devices. Clifton Park, New York; 2003. ISBN 0-7668-4026-3.
- ZES ZIMMER: 1 to 8 Channel Precision Power Analyzer LMG500. http://www.zes.com/english/products/one-to-eight-channel-precision-power-analyzer-lmg500.html
- Chandrakasan PA, Brodersen RW: Minimizing Power Consumption in CMOS Circuits. Tech. rep., University of California at Berkeley, 1995. http://bwrc.eecs.berkeley.edu/publications/1995/Min_pwr_consump_CMOS_crct/paper.fm.pdf
- van der Bijl HJ: Theory and Operating Characteristics of the Thermionic Amplifier. Proceedings of the IRE (Institute of Radio Engineers) 1919, 97–126.
- HP Enclosure. http://h18000.www1.hp.com/products/blades/components/enclosures/c-class/c3000/?jumpid=reg_R1002_USEN
- HP Blade. http://h10010.www1.hp.com/wwpc/us/en/sm/WF02d/3709945-3709945-3328410.html?jumpid=in_r2515_us/en/smb/psg/psc404redir-ot-xx-xx-/chev/
- Basmadjian R, Bunse C, Georgiadou V, Giuliani G, Klingert S, Lovasz G, Majanen M: FIT4Green: Energy aware ICT Optimisation Policies. Proceedings of the COST Action IC0804 on Energy Efficiency in Large Scale Distributed Systems - 1st Year 2010.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.