Virtual machine placement in an energy-aware load balancer using a fog classifier

Cloud datacenters carry huge volumes of data and tasks, allocating resources to multiple workstations. Most cloud services operate under service level agreement (SLA) placements. During execution a datacenter consumes energy and emits carbon, so operating cost is always a consideration. We address this challenge using an energy-aware load balancer. This load balancer is fixed in virtual machines (VMs), and a classifier is required for selecting VMs. Since deploying VMs is a very important factor, fog-enabled services are required for a geographically distributed load balancer with energy efficiency. In this paper we propose offloading VM services and a fog classifier for load balancing cloud services. To place a VM from one host to another we use a Host Load Balancer with Energy-Aware Placement algorithm. In this case a dynamic cloud environment can be tested and the host results compared. This is an empirical approach to placing VMs without compromising users. The simulations are done using CloudSim, and TensorFlow is used to generate a deep belief network model for preparing VM placement. Our proposed method achieves 96% energy efficiency with minimum migration cost. The results are compared with existing placement methods based on active host availability.


Introduction
Cloud services are a very important factor in providing online services such as software, applications and platforms. A virtual machine is selected based on the request and the infrastructure requirements from the user. Nowadays ubiquitous services are used for various computing applications [1]. Computing and storage services perform extreme operations over the internet. This technological shift is the major reason for enabling convenient, on-demand, resource-sharing operations. It is a configurable computing paradigm with a pay-per-usage model. Fog computing is the cloud edge-level service that extends the networking, storage and processing of clouds to users at different resource utilizations [2].
Clouds and fogs employ virtual machines, and the major focus for a VM is migration and replication. These two factors depend on load balancing and energy efficiency. The major role is placing the virtual machine and balancing the load under running conditions. Due to heavy machine maintenance and large volumes of data processing, the workload keeps a VM always in a running state. This factor has been investigated in various studies, and we need to establish a better relationship between VM workload [3] and the number of active users. According to results reported for the Amazon Web Services EC2 cloud [4], as the number of users varies, utilization can be as low as 1% for intensive applications and increases up to 60% under CPU-optimized load balancing conditions [1].
From the researchers' point of view, various fog computing [5] problems and challenges are being addressed in cloud applications. While implementing a load balancer, two factors must be considered: energy efficiency and the placement of VMs [6]. In terms of quality of service, we need to provide for low-latency service providers, and multiple users can gain access via fog servers. Energy efficiency is obtained from fog server placements, profiles and applications. The number of downloads [7], access specifications [8], service level agreements [9], number of updates [10] and amount of data pre-loading [11] are considered for fixing a VM with a good energy efficiency factor.
In this paper, we present an energy-aware load balancer framework that fixes VMs using a fog classifier. In this case we optimize the cloud for a better energy efficiency factor. It is an integrated linear programming model that minimizes total energy consumption under multifactor VM workloads [12]. VM workload is measured using the number of user profiles and the number of nodes in an active stage, and these two factors are used to calculate the data rate. The data rate is the major factor for finding energy efficiency: once good, efficient energy use is achieved, the VM is placed in the exact position [13]. This paper is organized as follows: Section 2 surveys related work, Section 3 presents the proposed classifier with energy optimization, Section 4 reports experiments with various methods and Section 5 gives the conclusion and future scope.
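As a rough illustration of the workload-to-efficiency chain described above (user profiles and active nodes feed a workload score, which determines a data rate, which in turn yields an energy efficiency figure), the following sketch uses assumed linear weightings and function names; they are not the paper's exact model.

```python
# Illustrative pipeline: workload -> data rate -> energy efficiency.
# The 0.6/0.4 weights, link capacity and power figure are assumptions.

def vm_workload(active_user_profiles: int, active_nodes: int) -> float:
    """Combine the two workload indicators into a single score."""
    return 0.6 * active_user_profiles + 0.4 * active_nodes

def data_rate(workload: float, link_capacity_mbps: float) -> float:
    """Effective data rate falls as workload approaches link capacity."""
    return max(link_capacity_mbps - workload, 0.0)

def energy_efficiency(rate_mbps: float, power_watts: float) -> float:
    """Megabits delivered per watt; higher suggests a better placement."""
    return rate_mbps / power_watts if power_watts > 0 else 0.0

load = vm_workload(active_user_profiles=40, active_nodes=10)
rate = data_rate(load, link_capacity_mbps=100.0)
eff = energy_efficiency(rate, power_watts=250.0)
```

A placement candidate with the highest `eff` value would then be preferred, matching the idea that the data rate drives the efficiency ranking.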

Related work
Data centers are the highest-volume servers for distributing services across the world. Implementing a cloud application requires from a small number of servers up to 1000 servers. Wong et al. report that a large amount of energy is required as the number of servers increases [14]. According to the report by the National Record Data Center (US), server deployments consumed more than 90 billion kilowatt-hours of electricity, and the 2021 annual report indicates the rate doubling every year. As a result, nearly 200 million tons of carbon pollution occur every year [15].
Geographically distributed data center sites host various cloud service providers such as Google, IBM, Microsoft and Amazon. Recently, VMware has provided good infrastructure for computing resource sharing and multi-user availability [16]. Manikandan et al. note that placing a virtual machine is a tedious process requiring a strong cohesion factor, which raises the problems of clustering, optimization and scheduling of resources. When a cloud provider faces a load balancing issue, it affects overall VM performance [17]. Yung et al. describe various selection processes available for setting data center sites with respect to SLA and QoS factors. Each data center draws power from different servers, which can be recorded using on-site energy services [18].
The present system has coupling issues between cloud provider and cloud data center optimization. Normally both factors are affected by unbalanced scheduling and cloudlets from the VM [20]. Services can be accessed dynamically from data centers, and migration also reduces load optimization. Xiang et al. observe that as utilization of servers and data centers increases, energy consumption automatically increases as well, so good VM placement is needed to handle the data centers and servers. According to a 2022 survey from iMatrix, some countries have increased their carbon tax when industries emit carbon while operating servers, as this affects environmental sustainability [20].
George and Thimmana et al. note that monitoring energy consumption and balancing the load is a major issue; it directly affects the entire server and creates a high consumption factor. Researchers calculate different weight ratios occupied by the workload and physical resource availability. Based on these ratios, we consider the following parameters while setting VMs using the energy-aware load balancer: i. calculate the energy consumption factor and workload; ii. find the minimum critically occupied resources; iii. the cross-validation index with respect to VMs; and iv. the number of resources in an active stage. Taking these considerations as important, we provide good VM placement with respect to the energy consumption factor.

Cloud-fog model
In this work, a fog classifier is proposed to handle VM placement based on type, programming model and resource linearity. Fig. 1 shows the layered architecture model for cloud optimization. Here, data transmission is done using IP over wide area network technology, with a core volume of servers used and connected. A passive optical network connection is established with an optical line terminal and optical network units for data transmission. Cloud data centers are connected to the IP network, and the fog classifier extends the service via the edges. In this work, fog nodes are considered the access points that handle resources based on clusters.
Based on the above inputs, the major problem with data centers is the placement of VMs. In most computing it is not done properly, owing to active and dynamic energy utilization, so the following inputs must be considered while designing the fog classifier (Steps 1-4 are given in Table 2).
Step 5: The server is selected by the utilization index and the VM process.
Step 6: The task is classified based on execution time and waiting time from resource pool selection; a round-robin policy is used to allocate VMs in a scheduled manner.
Step 7: When the process completes, record the execution time and energy consumption factor, and mark the VM as empty.
In this paper, three VM placement methods are experimented with, referring to the energy consumption cost:
Factor 1: Verify the physical resource factors such as the power supply unit, cooling, UPS backup and lighting.
Factor 2: Data center availability.
Factor 3: The load balancing unit distributed among the data centers.
Factor 4: Renewable energy resources.
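The scheduled round-robin allocation with energy accounting described in Steps 5-7 can be sketched as follows; the host and task fields, the fixed 150 W active power draw and the per-task timings are illustrative assumptions, not values from the paper.

```python
# Round-robin VM allocation loop (Steps 5-7): pick the next host in
# rotation, run the task, record execution time and energy, free the VM.
from itertools import cycle

hosts = [{"id": h, "energy_j": 0.0, "busy": False} for h in range(3)]
tasks = [{"id": t, "exec_time_s": 2.0 + t} for t in range(5)]

rr = cycle(hosts)                          # Step 6: round-robin policy
log = []
for task in tasks:
    host = next(rr)                        # Step 5: next host in rotation
    host["busy"] = True
    energy = task["exec_time_s"] * 150.0   # assumed 150 W while active
    host["energy_j"] += energy
    log.append((task["id"], host["id"], energy))
    host["busy"] = False                   # Step 7: VM marked empty again
```

The `log` of (task, host, energy) triples corresponds to the Step 7 record of execution time and energy consumption per completed task.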
As shown in Fig. 2 above, VM placement is done in data centers across all geographical areas, treating overhead energy as low and considering the optimized server and the footprint of each resource allocation. VM placement is done by maximising renewable energy utilization and minimising total cost. In this method we sort all hosts in descending order by utilization factor. If utilization is low, we consider that host weak and use it for overload conditions. This process is repeated while fixing a good VM placement. Here we considered detection of underloaded hosts, the host utilization factor, data center workloads and vacated VMs. The existing method is also considered for targeting hosts. On this basis we propose the fog classifier for VM selection and host optimization.
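The host-ordering step described above can be sketched briefly: sort hosts by utilization in descending order, then treat the tail below a threshold as weak (underloaded) hosts whose VMs are candidates for vacating. The utilization values and the 0.20 threshold are illustrative assumptions.

```python
# Sort hosts by utilization (descending) and flag underloaded ones.
hosts = {"h1": 0.82, "h2": 0.15, "h3": 0.55, "h4": 0.05}  # utilization
UNDERLOAD = 0.20  # assumed threshold for "weak" hosts

ordered = sorted(hosts.items(), key=lambda kv: kv[1], reverse=True)
underloaded = [h for h, u in ordered if u < UNDERLOAD]
```

Hosts in `underloaded` would be drained and kept in reserve for overload conditions, as the method describes.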

Fog classifier-host load balancer with energy aware placement algorithm
The classifier predicts the number of hosts and fetches the lower threshold values. In this case the fog classifier divides the hosts into lower-threshold hosts and higher-threshold hosts, so a VM can be selected with minimum changes based on host availability (Table 1).
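A minimal sketch of the classifier's host split described above, assuming a single lower-threshold value (the 0.3 figure and the function name are illustrative, not from the paper):

```python
# Partition hosts into lower-threshold and higher-threshold pools.
def split_hosts(util_by_host: dict, lower_threshold: float = 0.3):
    """Hosts at or below the threshold form one pool, the rest the other."""
    low = {h: u for h, u in util_by_host.items() if u <= lower_threshold}
    high = {h: u for h, u in util_by_host.items() if u > lower_threshold}
    return low, high

low, high = split_hosts({"a": 0.1, "b": 0.7, "c": 0.25})
```

A VM request would then be matched against whichever pool allows placement with the fewest changes, per the host-availability criterion.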
Cloud users submit a VM request to the cloud broker, which selects the VM values for the user; in this case the time is obtained as follows.
VMi = (Type of Service, Hold_Time), where the type is selected from the Amazon EC2 VM instance types and the running time is measured on a first come, first served basis.
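The first come, first served handling of VM requests can be sketched as below, modelling each VMi as a (service type, hold time) pair; the instance names and durations are illustrative assumptions.

```python
# FCFS service of VM requests: each request holds its VM for Hold_Time.
from collections import deque

requests = deque([("t2.micro", 30), ("m5.large", 120), ("t2.micro", 60)])

served = []
clock = 0
while requests:
    service_type, hold_time = requests.popleft()  # first come, first served
    start = clock
    clock += hold_time                            # VM held for its duration
    served.append((service_type, start, clock))
```

Each entry in `served` records the type plus the measured start and end times, which is how the broker can account for the running-out time of each VM.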
Power consumption in the data center: the data center draws power from the UPS or networking devices, which is distributed to all data center components while using the least energy (Table 2). The cost of energy is obtained from the overhead power Po together with the utilized power obtained from the data centers.
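A sketch of this two-component energy cost, assuming the total is the overhead power plus the utilized power of active hosts, with a simple linear idle-to-peak model per host (the idle/peak wattages and overhead figure are illustrative assumptions, not the paper's measurements):

```python
# Total data-center power = overhead (cooling, UPS, lighting) + utilized.
def energy_cost(p_overhead_w: float, host_loads: list,
                idle_w: float = 100.0, peak_w: float = 250.0) -> float:
    """Utilized power per host scales linearly from idle to peak with load."""
    utilized = sum(idle_w + (peak_w - idle_w) * u for u in host_loads)
    return p_overhead_w + utilized

total_w = energy_cost(p_overhead_w=500.0, host_loads=[0.2, 0.8])
```

Under this model, shutting down a host removes its entire idle-to-peak term, which is why vacating underloaded hosts reduces the energy cost.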

Experimental setup
Our proposed system is evaluated in an Infrastructure as a Service (IaaS) cloud environment using Amazon EC2 cloud VMs, and the CloudSim toolkit is used to test real-time environments. It is a large-scale platform, so evaluation can be done with the virtualized load balancer model. Here the fog classifier adds features such as VM clustering, the host list, VM allocation, the minimum power consumption factor and overhead values. Fig. 3 below shows the VM machine server with its types. We designed the data centers with data center utilization, available energy, energy-aware requests from the server and the cloud information set. To operate the data center we used a hypervisor-created virtual machine with the memory, bandwidth and CPU shown in Tables 3 and 4.

VMs placement
We used optimal placement with data center selection over a core network topology consisting of 30 nodes and 120 bidirectional links. Fig. 4 below shows the core network topology from the CISCO Network Index representation.
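A topology of the stated size can be generated for placement experiments as in the sketch below; this is a random undirected graph matching the 30-node, 120-link dimensions, not the actual CISCO Network Index topology.

```python
# Build a random core topology: 30 nodes, 120 distinct bidirectional links.
import random

random.seed(0)  # reproducible experiment setup
nodes = list(range(30))
links = set()
while len(links) < 120:
    a, b = random.sample(nodes, 2)
    links.add((min(a, b), max(a, b)))  # one undirected edge per node pair
```

With 30 nodes there are 435 possible undirected edges, so 120 distinct links is always reachable; each link can then carry the bidirectional traffic assumed by the placement model.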

CloudSim experimental inputs
From Table 5 above, the experiments are set up in CloudSim and our proposed algorithm is applied to evaluate the cloud. Here the fog classifier classifies the cloud based on usage and availability. The proposed method allocates the maximum number of tasks to active hosts. Energy consumption is proportional to the active hosts in the dataset. If we increase the number of hosts, the result will decrease, or the data center will shut down.
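The proportionality just described can be made concrete with a toy model: packing the maximum number of tasks onto active hosts reduces the host count, and energy scales with that count. The per-host wattage and task capacity are illustrative assumptions.

```python
# Toy model: energy is proportional to active hosts, so consolidation
# (max tasks per active host) lets surplus hosts be shut down.
import math

def active_hosts_needed(num_tasks: int, tasks_per_host: int) -> int:
    """Fewest hosts that can carry the tasks at the given packing density."""
    return math.ceil(num_tasks / tasks_per_host)

def total_energy_w(active_hosts: int, per_host_w: float = 200.0) -> float:
    return active_hosts * per_host_w  # proportional to active host count

spread = total_energy_w(active_hosts=10)              # tasks spread thinly
packed = total_energy_w(active_hosts_needed(20, 4))   # consolidated: 5 hosts
```

Here consolidating 20 tasks at 4 per host needs only 5 active hosts, halving the energy of the 10-host spread, which mirrors the shutdown behaviour reported for the data centers.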
Based on Table 6 and Fig. 5 above, the VM types are tested using various data centers and numbers of hosts. In this case energy consumption is below 35% and an average accuracy index of 96% is achieved. At each stage a number of VMs are shut down based on the number of VMs migrated. From this result, our proposed algorithm is compared with existing VM placement and accuracy index values using TensorFlow, which serves as the comparison tool for presenting the results in various representations.
Based on Table 7 above, the proposed method is compared with the existing virtualized load balancer and decision tree index methods across selected VM types, data centers and numbers of hosts. The comparison is taken as accuracy and the number of data centers shut down. On these two factors our proposed method outperforms the existing ones.

Table 2
Conditions and task allocation in VMs
Step 1: Select the number of cloudlet tasks from the resource pool.
Step 2: Obtain the VM_List and send the request to the cloud broker.
Step 3: Check the information set for VM availability; if a VM is available, send the request to the cloud data center and allocate the VM, otherwise account for the energy and cost consumed.
Step 4: Once a data center is selected, record and monitor its energy; if the energy increases, the fog classifier classifies the node and allocates another VM based on availability.

Table 3
Data center selection based on VM request from resource pool

Table 4
Energy aware consumption results after allocating VMs based on user requests
Fig. 4: Core network topology; VM allocation using active users; VM popularity based on user downloading rate {1, 5, 10, 25, 50, 100}%.

Table 5
Data center, server and VM characteristics for experiments

Table 6
Experimental result of VMs placement and accuracy index