A Lightweight Approach Multiplexing Resource Allocation Scheme by Virtualization based on Time Series in SOC

C.Saravanan1, C.Anuradha2

1PG Scholar, SCAD Engineering College, Tirunelveli.

2HOD, SCAD Engineering College, Tirunelveli.

Article Publishing History
Article Published: 02 Jul 2014
ABSTRACT:

By leveraging virtual machine (VM) technology, which provides performance and fault isolation, Cloud resources can be provisioned on demand in a fine-grained, multiplexed manner rather than in monolithic pieces. By integrating volunteer computing into Cloud architectures, we envision a gigantic Self-Organizing Cloud (SOC) being formed to reap the huge potential of untapped commodity computing power over the Internet. Toward this new architecture, in which each participant may autonomously act as both resource consumer and provider, we propose a fully distributed, VM-multiplexing resource allocation scheme to manage decentralized resources. Our approach not only maximizes resource utilization using the proportional share model (PSM), but also delivers provably and adaptively optimal execution efficiency. We also design a novel multi-attribute range-query protocol for locating qualified nodes. Contrary to existing solutions, which often generate bulky messages per request, our protocol produces only one lightweight query message per task on the Content Addressable Network (CAN). It works effectively to find, for each task, its qualified resources under a randomized policy that mitigates contention among requesters. This paper also extends our previous work and proposes a lightweight mathematical model that quantitatively estimates the energy cost of live migration of an idle virtual machine. A series of experiments was conducted on KVM to profile the migration time and the power consumption during live migration. Based on these data, we derived an energy cost model that predicts the energy overhead of live migration of virtual machines with an accuracy greater than 90%.

KEYWORDS: Cloud Computing; VM-multiplexing Resource Allocation; Convex Optimization; P2P Multi-attribute Range-query; Lightweight Model

Copy the following to cite this article:

Saravanan C, Anuradha C. A Lightweight Approach Multiplexing Resource Allocation Scheme by Virtualization based on Time Series in SOC. Orient. J. Comp. Sci. and Technol;7(1)


Copy the following to cite this URL:

Saravanan C, Anuradha C. A Lightweight Approach Multiplexing Resource Allocation Scheme by Virtualization based on Time Series in SOC. Orient. J. Comp. Sci. and Technol;7(1). Available from: http://computerscijournal.org/?p=751


INTRODUCTION

Cloud computing has emerged as a compelling paradigm for deploying distributed services. Resource allocation problem in Cloud systems emphasizes how to harness the multi-attribute resources by multiplexing operating systems. With virtual machine (VM) technology [1], we are able to multiplex several operating systems on the same hardware and allow task execution over its VM substrates without performance interference. Fine-grained resource sharing can be achieved as each VM substrate can be configured with proper shares of resources (such as CPU, memory, storage, network bandwidth) dynamically.
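As a toy illustration of such fine-grained sharing (the function name and the capacity/demand figures are ours, not the paper's), a node's capacity can be treated as a vector of attributes, with each VM substrate granted a fractional share of every attribute as long as the node can still cover it:

```python
# Sketch (hypothetical names/numbers): fine-grained multiplexing of one
# node's resources among several VM substrates.  Each VM asks for a share
# of every attribute (CPU in Gflops, memory in GB, bandwidth in MBps).

def allocate_shares(capacity, demands):
    """Grant each VM its demanded share vector if the node can still
    cover it; return the granted vectors and the leftover capacity."""
    remaining = list(capacity)
    granted = []
    for d in demands:
        if all(r >= x for r, x in zip(remaining, d)):
            remaining = [r - x for r, x in zip(remaining, d)]
            granted.append(d)
        else:
            granted.append(None)  # rejected: would exceed node capacity
    return granted, remaining

# Node with 4 Gflops CPU, 8 GB RAM, 100 MBps bandwidth.
grants, left = allocate_shares([4.0, 8.0, 100.0],
                               [[1.5, 2.0, 30.0],   # VM 1: granted
                                [2.0, 4.0, 50.0],   # VM 2: granted
                                [1.0, 3.0, 40.0]])  # VM 3: rejected (RAM)
```

The point is only that shares are vectors, adjusted per attribute, rather than whole-machine grants.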

To mitigate the contention caused by analogous queries in CAN, our range-query protocol proactively diffuses resource indexes over the network and randomly routes query messages among nodes to locate qualified ones that satisfy tasks' minimal demands. To avoid the uneven load distribution and abrupt resource over-utilization that uncoordinated node selection by autonomous participants could cause, we investigate three node-selection policies, namely the double-check policy [21], the queuing policy [22], and the extra-virtual-dimension policy [19].

The rest of the paper is organized as follows. Section 2 formulates the resource allocation problem in a VM-multiplexing environment, proves that an optimal resource allocation exists, and shows that our solution is optimal. Section 3 presents our resource discovery protocol.

This paper proposes a lightweight model to quantify the energy cost of live migration of virtual machines. A series of experiments was conducted on KVM [15], and we derived such a model through linear regression on the recorded data. The model requires neither a dedicated power model nor a dedicated time model; it quantifies the energy overhead of live migration with respect to both the RAM size of the virtual machine and the server's available network bandwidth. It complements studies that focus on the performance cost of live migration of virtual machines, and estimates the energy cost of migrating an idle virtual machine with an accuracy greater than 90%.
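The regression step can be sketched as follows. The profiled numbers here are synthetic stand-ins, and the feature choice (RAM size plus the transfer time RAM/bandwidth) is our assumption for illustration, not the paper's actual measurements or exact model:

```python
import numpy as np

# Illustrative sketch: fit an energy-cost model for live migration by
# linear regression, as the paper does on KVM profiling data.  The data
# below is synthetic and generated from a known ground-truth formula so
# that the fit can be checked; real profiling data would replace it.

ram = np.array([800, 1100, 1400, 1700, 800, 1700], dtype=float)  # MB
bw  = np.array([ 20,   40,   60,  100, 100,   20], dtype=float)  # MBps
# Synthetic ground truth: E = 0.02*ram + 1.5*(ram/bw) + 5 joules
energy = 0.02 * ram + 1.5 * (ram / bw) + 5.0

# Design matrix: [RAM, RAM/bandwidth (a proxy for migration time), 1]
X = np.column_stack([ram, ram / bw, np.ones_like(ram)])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)

def predict(ram_mb, bw_mbps):
    """Predicted energy overhead of migrating an idle VM (joules)."""
    return coef[0] * ram_mb + coef[1] * (ram_mb / bw_mbps) + coef[2]
```

On noiseless data the regression recovers the coefficients exactly; on real measurements the residuals determine the (here, >90%) prediction accuracy.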

2 OPTIMAL RESOURCE ALLOCATION

Given a task tij with its weight vector w(tij) and a budget B(tij), we first prove that the optimal resource allocation on a qualified node ps with its price vector b(ps) does exist.

Lemma 1: The optimal allocation (denoted r*(tij)) exists if and only if (⇔) Inequalities (6) and (7) are met:

b(ps)ᵀ e(tij) ≤ B(tij)   (6)

e(tij) ≤ a(ps)   (7)

Proof:

(⇒) By the transitive property of inequalities: if r*(tij) exists, it must satisfy Inequalities (3) and (5); thus Inequalities (6) and (7) must hold.

(⇐) To satisfy Slater's condition [23]: if b(ps)ᵀ e(tij) = B(tij) or e(tij) = a(ps), then e(tij) is the unique solution, which can be regarded as optimal. If b(ps)ᵀ e(tij) < B(tij) and e(tij) ≠ a(ps), then besides e(tij) there must exist another better solution r(tij) such that b(ps)ᵀ r(tij) < B(tij) and e(tij) ≤ r(tij) ≤ a(ps); thus r*(tij) must exist according to Slater's condition [23]. Similarly, if b(ps)ᵀ e(tij) < B(tij) and e(tij) ≤ a(ps), Slater's condition still holds after excluding the equalities from Constraint (7).
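Lemma 1 doubles as a simple feasibility test that can be evaluated locally per node. The sketch below checks Inequalities (6) and (7) directly; the vector names mirror the paper's notation, while the concrete price, demand, and capacity numbers are illustrative:

```python
# Feasibility test from Lemma 1: node ps qualifies for task tij iff the
# expected demand e(tij) is affordable under the budget (Inequality (6))
# and fits within the node's available resources a(ps) (Inequality (7)).

def qualifies(price_b, expected_e, avail_a, budget_B):
    cost = sum(b * e for b, e in zip(price_b, expected_e))  # b(ps)^T e(tij)
    within_budget = cost <= budget_B                        # Inequality (6)
    within_capacity = all(e <= a                            # Inequality (7)
                          for e, a in zip(expected_e, avail_a))
    return within_budget and within_capacity

b = [2.0, 1.0, 0.1]    # illustrative price per unit of CPU, memory, bandwidth
e = [1.5, 2.0, 30.0]   # task's expected resource vector e(tij)
a = [4.0, 8.0, 100.0]  # node's available resource vector a(ps)
```

With these numbers the demand costs 8 units, so the node qualifies under a budget of 10 but not under a budget of 5.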

We assume that a qualified node ps satisfying Inequalities (6) and (7) can be found, based on tij's expected resource vector e(tij), by a resource discovery protocol (to be discussed in Section 3). Thus, we can rewrite Constraint (5) as Inequality (8) and construct a Lagrangian function F1(r(tij)). That is, the optimal resource vector r* can be found as long as the above conditions are satisfied simultaneously.

To solve the above simultaneous equations and inequalities, there are two traditional candidate strategies: (1) the brute-force method and (2) the interior point method. Neither is satisfactory here: the former must exhaust all possible combinations, while the latter suffers from a convergence problem.

If we replace Constraint (5) with Constraint (11), we can find an optimal solution through a few convex-optimization steps. That is, via such a constraint relaxation, we can optimize the resource allocation for task tij on node ps without exhausting all 3^R possible combinations, as the brute-force method does, or worrying about the convergence problem of the interior point method.
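To illustrate why such a relaxation helps, consider a stand-in objective: a weighted log utility under the budget constraint. This objective is our assumption for illustration (the paper's exact objective function is not reproduced in this excerpt), but its Lagrangian admits a closed-form budget split, so no brute-force enumeration or interior-point iteration is needed:

```python
# Stand-in example: maximize sum_k w_k * log(r_k) subject to b^T r <= B.
# Setting the Lagrangian's gradient to zero gives the closed form
#   r_k = (w_k / sum(w)) * B / b_k,
# i.e. each dimension gets a budget share proportional to its weight.
# (Capacity bounds r <= a are omitted here for simplicity.)

def optimal_allocation(weights, prices, budget):
    total_w = sum(weights)
    return [(w / total_w) * budget / b for w, b in zip(weights, prices)]

# Weights over (CPU, memory, bandwidth), unit prices, and a budget of 8.
r = optimal_allocation([2.0, 1.0, 1.0], [2.0, 1.0, 0.1], 8.0)
# The budget is fully spent: sum(b_k * r_k) == 8.0
```

The same pattern, one direct evaluation of KKT stationarity conditions, is what the constraint relaxation buys in the paper's setting.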

3 POINTER-GOSSIPING CAN

Our resource allocation approach relies on the assumption that all qualified nodes must satisfy Inequalities (6) and (7) (i.e., Lemma 1). To meet this requirement, we design a resource discovery protocol, namely pointer-gossiping CAN (PG-CAN), to find these qualified nodes. We choose CAN [17] as the DHT overlay to adapt to the multi-dimensional feature.

Like traditional CAN, each node (a.k.a. duty node) under PG-CAN is responsible for a unique multi-dimensional range zone, randomly selected when it joins the overlay. Fig. 2 (a) illustrates an example of a CAN overlay network. Suppose there are 25 joined nodes, each taking charge of a single zone. If a new node (node 26) joins, a random point such as (0.6 Gflops, 0.55 GB) is generated, and its zone is formed by evenly splitting, along one dimension, the existing zone (node 25 in Fig. 2 (a)) that contains this point. If two nodes (e.g., pi and pj) have exactly one non-overlapped range dimension and are adjacent along that dimension, we call them neighbors of each other. Furthermore, if the non-overlapped range of pi is always no less than pj's, pi is called pj's positive neighbor and pj is called pi's negative neighbor. In Fig. 2 (a), nodes 9, 12, and 20 are positive neighbors of node 1.
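The neighbor relation just described can be expressed directly on zones represented as per-dimension (low, high) ranges. The zone coordinates below are illustrative, not taken from Fig. 2:

```python
# Neighbor test for CAN zones (axis-aligned boxes, one (low, high) pair
# per dimension).  Two zones are neighbors iff their ranges overlap in
# all dimensions but one and abut in that one; the zone on the higher
# side of the abutting dimension is the "positive" neighbor.

def neighbor_relation(zi, zj):
    """Return 'positive' if zi is zj's positive neighbor, 'negative' if
    zi is zj's negative neighbor, or None if they are not neighbors."""
    non_overlap = [d for d, ((lo_i, hi_i), (lo_j, hi_j))
                   in enumerate(zip(zi, zj))
                   if hi_i <= lo_j or hi_j <= lo_i]
    if len(non_overlap) != 1:
        return None
    d = non_overlap[0]
    (lo_i, hi_i), (lo_j, hi_j) = zi[d], zj[d]
    if lo_i == hi_j:     # zi sits just above zj along dimension d
        return 'positive'
    if hi_i == lo_j:     # zi sits just below zj along dimension d
        return 'negative'
    return None          # a gap between the ranges: not adjacent

z1 = [(0.0, 0.5), (0.0, 0.5)]  # one node's zone
z9 = [(0.5, 1.0), (0.0, 0.5)]  # abuts z1 along dimension 0, on the high side
```

Here z9 is z1's positive neighbor and, symmetrically, z1 is z9's negative neighbor.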

Each duty node (such as D1) caches state-update messages received from its neighbors; these are checked periodically and removed if outdated (i.e., beyond their TTL). Meanwhile, it propagates its own identifier (such as its IP) to a few randomly selected pointer nodes toward its negative direction. Duty nodes containing valid state messages are called non-empty-cache nodes.

Basically, there are two manners of propagating the duty nodes' identifiers (or pointers) backward – the spreading manner (Fig. 3 (a)) and the hopping manner (Fig. 3 (b)) – so PG-CAN likewise splits into two types, namely spreading-manner-based PG-CAN (SPG-CAN) and hopping-manner-based PG-CAN (HPG-CAN). In Fig. 3 (a), the duty node D1 sends a pointer message containing D1's identifier to its selected pointer nodes (such as D2 and D3), notifying them that D1 has records. Upon receiving the message, the pointer nodes (D2 and D3) further gossip D1's identifier to their negative-direction pointer nodes along the next dimension. In Fig. 3 (b), the identifier of any non-empty-cache node is forwarded from pointer node to pointer node along each dimension. Obviously, the former requires fewer hops for message delivery, but its identifiers cannot be diffused as widely as the latter's. In fact, we can prove that the delay complexity of identifier delivery for the hopping manner is O(log2 n) (Theorem 5), so the hopping manner is likely to outperform the spreading manner (to be confirmed in our simulation).
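The coverage-versus-delay trade-off between the two manners can be seen in a toy one-dimensional sketch (ours; it deliberately ignores CAN's multi-dimensional routing and the actual delay bounds, and only contrasts how far a pointer diffuses against how many hops that takes):

```python
# Toy contrast of the two pointer-propagation manners on a line of nodes
# indexed 0..n-1, where "negative direction" means lower indices.

def spread(duty, samples):
    """Spreading manner: the duty node contacts `samples` pointer nodes
    directly; everything arrives after a single hop."""
    reached = set(range(max(0, duty - samples), duty))
    return reached, 1                 # (nodes reached, delivery hops)

def hop(duty):
    """Hopping manner: the pointer is relayed node-to-node toward
    index 0, eventually reaching every negative-direction node."""
    reached = set(range(duty))
    return reached, duty              # wider coverage, more hops

reached_s, hops_s = spread(duty=8, samples=2)   # few nodes, 1 hop
reached_h, hops_h = hop(duty=8)                 # all 8 nodes, 8 hops
```

Spreading is cheap but narrow; hopping pays more hops for full negative-direction diffusion, which is why the paper expects it to locate qualified resources more effectively.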

CONCLUSIONS AND FUTURE WORK

This paper proposes a novel scheme (DOPS) for virtual resource allocation on a Self-Organizing Cloud (SOC), with three key contributions listed below.

  • Optimization of a Task's Resource Allocation Under the User's Budget: With a realistic monetary model, we propose a solution that optimizes task execution performance based on the resources assigned under the user's budget. We prove its optimality using the KKT conditions from convex-optimization theory.
  • Maximized Resource Utilization Based on PSM: To further exploit idle resources, we design a dynamic algorithm that combines the above algorithm with PSM and with the arrival/completion of new tasks. This gives users an incentive: they gain an extra share of unused resources without additional payment. Experiments confirm that super-optimal execution efficiency of their tasks is achievable. According to our simulation, DOPS improves system throughput by 15% to 60% over the traditional methods used in the P2P Grid model.
  • Lightweight Resource Query Protocol with Low Contention: We summarize the resource-searching request as two range-query constraints, Formulas (6) and (7), and prove them to be the sufficient and necessary conditions for obtaining the optimal resource allocation. Experiments confirm that the designed PG-CAN protocol, with its lightweight query overhead, searches for qualified resources very effectively.

So far, we have successfully built a prototype supporting live migration of VMs between any two nodes on the Internet (even when they are behind different NATs). In the future, we will study fault-tolerance support for a (DOPS-based, PG-CAN-enabled) SOC system; we will also conduct a sensitivity analysis of how violations of our model assumptions would impact the optimal resource allocation.

In this paper we also investigated the energy cost of live migration and analyzed the roles of the VM size and of the available network bandwidth in the energy consumption of the hosting servers. We migrated virtual machines with sizes between 800 MB and 1700 MB at network bandwidths between 20 MBps and 100 MBps. We measured the migration time and the power consumption of the source and destination hosts, and compared these values with the hosts' idle power consumption to capture the energy cost of live migration. From the profiled data, we deduced a lightweight energy cost model through linear regression. Experimental results demonstrate that this model estimates the energy overhead of live migration of virtual machines with an accuracy of more than 90%.

REFERENCES

  1. J. E. Smith and R. Nair, Virtual Machines: Versatile Platforms For Systems And Processes. Morgan Kaufmann, 2005.
  2. D. Gupta, L. Cherkasova, R. Gardner, and A. Vahdat, “Enforcing performance isolation across virtual machines in xen,” Proc. Seventh ACM/IFIP/USENIX Int’l Conference on Middleware (Middleware’06), pp. 342–362, 2006.
  3. M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, “Above the clouds: A berkeley view of cloud computing,”, Tech. Rep., UCB/EECS-2009-28, Feb 2009.
  4. D. P. Anderson, “Boinc: a system for public-resource computing and storage,” Proc. Fifth IEEE/ACM Int’l Workshop on Grid Computing, pp. 4–10, 2004.
  5. P. Crescenzi and V. Kann, A compendium of NP optimization problems. [Online]. Available: ftp://ftp.nada.kth.se/Theory/Viggo-Kann/compendium.pdf
  6. O. Sinnen, Task Scheduling for Parallel Systems (Wiley Series on Parallel and Distributed Computing), Wiley-Interscience, 2007.
  7. O. H. Ibarra and C. E. Kim, “Heuristic algorithms for scheduling independent tasks on nonidentical processors,” J. ACM, vol. 24, pp. 280–289, April 1977.
  8. X. Meng and et al., “Efficient resource provisioning in compute clouds via vm multiplexing,” Proc. seventh IEEE int’l conf. on Autonomic computing (ICAC’10), pp. 11–20, 2010.
  9. J. Sonneck and A. Chandra, “Virtual putty: Reshaping the physical footprint of virtual machines,” Proc. Int’l HotCloud Workshop in conjunction with USENIX Annual Technical Conference, 2009.
  10. D. Gupta and et al., “Difference engine: Harnessing memory redundancy in virtual machines,” Proc. eighth Int’l USENIX Symp. On Operating Systems Design and Impl., pp. 309 – 322, 2008.
  11. S. Govindan, J. Choi, B. Urgaonkar, A. Sivasubramaniam, and A. Baldini, “Statistical profiling-based techniques for effective power provisioning in data centers,” in Proc. fourth ACM Conf. European Conf. on Computer systems (EuroSys’09), 2009, pp. 317–330.
  12. M. Feldman, K. Lai, and L. Zhang, “The proportional-share allocation market for computational resources,” IEEE Trans. on Parallel and Distributed Systems, vol. 20, pp. 1075–1088, 2009.
  13. S. Soltesz, H. Poetzl, M. E. Fiuczynski, A. Bavier, and L. Peterson, “Container-based operating system virtualization: a scalable, highperformance alternative to hypervisors,” Proc. second ACM Int’l European Conf. on Computer Systems (Euro’07), 2007, pp. 275–
  14. L. Cherkasova, D. Gupta, and A. Vahdat, “Comparison of the three cpu schedulers in xen,” SIGMETRICS Perform. Eval. Rev., vol. 35, no. 2, pp. 42–51, 2007.
  15. “The role of memory in VMware ESX Server 3,” Tech. Rep. Online at: http://www.vmware.com/pdf/esx3 memory.pdf
  16. dm-ioband: online at http://sourceforge.net/apps/trac/ioband.
  17. S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker, “A scalable content-addressable network,” Proc. ACM Int’l Conf. on Applications, technologies, architectures, and protocols for computer communications (SIGCOMM’2001), pp. 161–172, 2001.
  18. I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan, “Chord: A scalable peer-to-peer lookup service for internet applications,” Proc. ACM Int’l Conf. on Applications, technologies, architectures, and protocols for computer communications (SIGCOMM’ 2001), pp. 149–160, 2001.
  19. J. S. Kim and et al., “Using content-addressable networks for load balancing in desktop grids,” Proc. 16th ACM Int’l Symp. On High Performance Distributed Computing (HPDC’07), pp. 189–198, 2007.
  20. A. Leite, H. Mendes, L. Weigang, A. de Melo, and A. Boukerche, “An architecture for P2P bag-of-tasks execution with multiple task allocation policies in desktop grids,” Proc. IEEE Int’l Conf. Cluster Computing, pp. 1–11, Feb. 2011.
  21. Y. Drougas and V. Kalogeraki, “A fair resource allocation algorithm for peer-to-peer overlays,” Proc. 24th Int’l Conf. on Computer Communications (INFOCOM’05), pp. 2853–2858, 2005.
  22. D. Gross and C. M. Harris, Fundamentals of Queueing Theory (Wiley Series in Probability and Statistics), Wiley-Interscience, Feb. 1998.
  23. S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2009.
  24. S. Di, C.-L. Wang, W. Zhang, and L. Cheng, “Probabilistic best-fit multi-dimensional range query in self-organizing cloud,” Proc. 40th IEEE Int’l Conf. on Parallel Processing, pp. 763–772, 2011
  25. P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, “Xen and the art of virtualization,” in Proc. 19th ACM symp. on Operating systems principles (SOSP’03), 2003, pp. 164–177.
  26. Peersim simulator: http://peersim.sourceforge.net.
  27. Google cluster-usage traces: online at http://code.google.com/p/googleclusterdata.
  28. C. A. Waldspurger, “Memory resource management in vmware esx server.” [Online]. Available: http://www.usenix.org/events/osdi02/tech/waldspurger.html
  29. J. P. Walters, V. Chaudhary, M. Cha, S. G. Jr., and S. Gallo, “A comparison of virtualization technologies for hpc,” Proc. 22nd Int’l IEEE Conf. on Advanced Information Networking and Applications (AINA’08), pp. 861–868, 2008.
  30. M. Jelasity, W. Kowalczyk, and M. van Steen, “Newscast computing,” Vrije Universiteit Amsterdam, Tech. Rep., 2006.
  31. R. K. Jain, The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation and Modelling, John Wiley & Sons, April 1991.
  32. J. Cao, F. B. Liu, and C. Z. Xu, “P2pgrid: integrating p2p networks into the grid environment: Research articles,” vol. 19, no. 7, Chichester, UK: John Wiley and Sons Ltd., pp. 1023–1046, 2007
  33. H. Abbes, C. Cerin, and M. Jemni, “Bonjourgrid: Orchestration of multi-instances of grid middlewares on institutional desktop grids,” Proc. 23rd IEEE Int’l Symp. on Parallel & Distributed Processing (IPDPS’09), pp. 1–8, 2009
  34. A. Rossi, A. Singh, and M. Sevaux, “A metaheuristic for the fixed job scheduling problem under spread time constraints,” Comput. Oper. Res., vol. 37, pp. 1045–1054, June 2010.
  35. P. Switalski and F. Seredynski, “Generalized extremal optimization for solving multiprocessor task scheduling problem,” Proc. Seventh Int’l Conf. on Simulated Evolution and Learning, Berlin, Heidelberg: Springer-Verlag, 2008, pp. 161–169.
  36. G. Singh, C. Kesselman, and E. Deelman, “A provisioning model and its comparison with best-effort for performance-cost optimization in grids,” in Proc. 16th ACM Symp. on High Performance Distributed Computing (HPDC’07), 2007, pp. 117–126.
  37. Q. Zheng, H. Yang, and Y. Sun, “How to avoid herd: a novel stochastic algorithm in grid scheduling,” Proc. 15th ACM Int’l Symp. on High Performance Distributed Computing (HPDC’06), Los Alamitos, pp. 267–278, 2006.
  38. C. B. Lee and A. E. Snavely, “Precise and realistic utility functions for user-centric performance analysis of schedulers,” Proc. 16th ACM Int’l Symp. on High Performance Distributed Computing  HPDC’07), pp. 107–116, 2007.

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.