Please use this identifier to cite or link to this item: https://idr.l4.nitk.ac.in/jspui/handle/123456789/14490
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | B, Annappa | -
dc.contributor.author | Raghunath, Bane Raman | -
dc.date.accessioned | 2020-08-28T11:25:31Z | -
dc.date.available | 2020-08-28T11:25:31Z | -
dc.date.issued | 2019 | -
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/14490 | -
dc.description.abstract | Cloud computing has become increasingly popular in recent years. Information Technology industries and individual users are attracted to cloud computing because they can obtain the required number of resources from it. Cloud computing basically provides Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). Companies such as Google, Microsoft and Amazon host large datacenters, networked with high-end computer systems, and make them available to users on rent. These users may be individual researchers, organizations or companies. As datacenters are heavily used by many clients, and workloads are of different types varying in length and resource consumption, allocation of the underlying resources is the most important issue for their efficient utilization. In most large-scale datacenters, virtualization is the technology used for resource sharing among different applications running on Virtual Machines (VMs) created on the same Physical Machine (PM). The Virtual Machine Monitor (VMM) provides resource isolation among co-located VMs. However, this resource isolation does not provide performance isolation between VMs.

Resource scaling is an important property of virtualization. Elastic auto-scaling is the need of the day, and studies show that most of the existing datacenter infrastructure results in either over-provisioning or under-provisioning. This necessitates on-demand resource allocation to individual VMs from the physically shared pool of resources as per their dynamic requirements, in order to satisfy the Service Level Agreements (SLAs) between the customer and the cloud provider. Hence, it is necessary to predict the resource requirements periodically and well in advance. Most of the prediction techniques presented in the literature are useful only with a particular type of workload; hence, it is necessary to analyze which one should be used depending on the type of workload. Most of the studies concentrate on local resource allocation. When a resource deficiency is present, remote allocation can be considered, as most VMMs provide a live VM migration facility. Since the VM migration process is itself resource consuming, its effects on other running VMs have to be studied and the VM for migration has to be selected accordingly.

This thesis presents an architecture for dynamic on-demand resource allocation using statistical machine learning techniques. The resource allocation controller allocates resources locally on the same PM or remotely through a live VM migration to another PM. The need for migration is determined in advance, so that the migration is triggered when a sufficient number of resources is available. The migration manager selects the VM for migration that produces the least interference to other running VMs at a low migration cost. This migration is done without affecting the performance of the applications running on the migrating VM. The prevalent approaches are manual or automatic, and all of them are reactive approaches in which action is taken only after a specific situation is detected; hence the required number of resources remains unavailable until the action is taken. The proposed approach is proactive, so a sufficient number of resources is available even at peak times. Experiments are carried out with synthetic and real application workloads. Prediction of future requirements is done with a fuzzy prediction system and Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM).

The workload prediction achieves a Mean Absolute Error (MAE) of 0.056. The type of the workload is identified with the help of a chaos indicator, designed to decide which prediction technique should be used. Scaling of the CPU and network resources is done automatically in accordance with the dynamically changing workload, at a minimum granularity of 2 seconds, with savings in resources as compared to static allocation. It has been found that the proposed scheme allocates resources as per the dynamic requirements with minimum difference between actual requirements and allocation. The resource saving with the proposed method is around 30-50% as compared to static allocations, and resource underestimation errors due to spikes in the workload are minimized. The performance improvement in terms of application response time is around 15-20% as compared to other methods, because of proper selection of the VM for migration by the migration manager. | en_US
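
The abstract describes predicting each VM's near-future resource demand from its recent usage history with an LSTM recurrent network, evaluated by Mean Absolute Error. The thesis's actual features, window length, network layout and framework are not given on this page, so the following is only a minimal sketch under assumed values (a 30-sample CPU-utilization window, a single 32-unit LSTM layer, Keras as the framework, and a synthetic trace standing in for real monitoring data).

```python
# Minimal sketch of LSTM-based workload prediction (window length, layer sizes,
# framework and synthetic data are assumptions; the thesis's setup is not shown here).
import numpy as np
import tensorflow as tf

WINDOW = 30  # assumed: predict the next sample from the last 30 utilization samples

def make_windows(series, window=WINDOW):
    """Slice a 1-D utilization trace into (history, next-value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array([series[i + window] for i in range(len(series) - window)])
    return X[..., np.newaxis], y  # LSTM expects shape (samples, timesteps, features)

# Synthetic CPU-utilization trace in [0, 1], standing in for a real monitoring feed.
trace = 0.5 + 0.3 * np.sin(np.linspace(0, 60, 2000)) + 0.05 * np.random.rand(2000)
X, y = make_windows(trace)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")  # MAE is the error metric reported in the abstract
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Predicted utilization for the next interval, as input to an allocation controller.
next_util = float(model.predict(X[-1:], verbose=0)[0, 0])
print(f"predicted next-interval CPU utilization: {next_util:.3f}")
```

In this setup the reported MAE would simply be the trained model's mean absolute error between predicted and observed utilization on a held-out portion of the trace.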
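The abstract also distinguishes local scaling on the same PM from remote allocation via live migration, with migration planned before the shortage occurs. The controller logic below is an illustrative sketch only: the threshold, safety margin, headroom model and names are assumptions, not the thesis's actual algorithm.

```python
# Minimal sketch of a proactive local-vs-remote allocation decision.
# All thresholds, names and the headroom model are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VMState:
    name: str
    allocated_cpu: float   # CPU shares currently allocated to the VM
    predicted_cpu: float   # predicted demand for the next interval (e.g. from the LSTM)

def plan_allocation(vms, host_capacity, safety_margin=0.1):
    """Return per-VM actions: scale locally while the host has headroom,
    otherwise flag the VM so the migration manager can plan a migration in advance."""
    actions = []
    used = sum(vm.allocated_cpu for vm in vms)
    for vm in vms:
        demand = vm.predicted_cpu * (1 + safety_margin)  # hedge against underestimation spikes
        delta = demand - vm.allocated_cpu
        if delta <= 0:
            actions.append((vm.name, "scale_down", demand))
            used += delta
        elif used + delta <= host_capacity:
            actions.append((vm.name, "scale_up_local", demand))
            used += delta
        else:
            # Not enough local headroom: request migration planning *before* the
            # shortage actually occurs, which is what makes the approach proactive.
            actions.append((vm.name, "request_migration", demand))
    return actions

if __name__ == "__main__":
    vms = [VMState("vm1", 2.0, 2.6), VMState("vm2", 1.0, 0.7), VMState("vm3", 3.0, 3.9)]
    for action in plan_allocation(vms, host_capacity=7.0):
        print(action)
```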
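Finally, the migration manager is said to pick the VM that causes the least interference to co-located VMs at a low migration cost. A simple weighted scoring of candidates, as sketched below, is one way such a selection could look; the weights, the memory/dirty-rate cost proxy and the candidate data are purely hypothetical and are not the thesis's formulation.

```python
# Illustrative sketch of selecting a migration candidate by jointly weighing
# estimated interference and migration cost.  Weights and the cost proxy are
# assumptions; in practice both terms would be normalized to comparable scales.
def migration_score(vm, w_interference=0.6, w_cost=0.4):
    """Lower score = better migration candidate."""
    # Assumed cost proxy for pre-copy live migration: memory footprint scaled by
    # how quickly the VM dirties pages (dirty pages must be re-transferred).
    migration_cost = vm["mem_gb"] * (1 + vm["dirty_rate"])
    return w_interference * vm["interference"] + w_cost * migration_cost

candidates = [
    {"name": "vm1", "interference": 0.8, "mem_gb": 4, "dirty_rate": 0.5},
    {"name": "vm2", "interference": 0.3, "mem_gb": 8, "dirty_rate": 0.1},
    {"name": "vm3", "interference": 0.5, "mem_gb": 2, "dirty_rate": 0.2},
]
best = min(candidates, key=migration_score)
print("selected for migration:", best["name"])
```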
dc.language.iso | en | en_US
dc.publisher | National Institute of Technology Karnataka, Surathkal | en_US
dc.subject | Department of Computer Science & Engineering | en_US
dc.title | Prediction based Dynamic Resource Allocation in Virtualized Environments | en_US
dc.type | Thesis | en_US
Appears in Collections: 1. Ph.D Theses

Files in This Item:
File | Description | Size | Format
CS12F02.pdf | | 2.5 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.