Critical analysis of vendor lock-in and its impact on cloud computing migration: a business perspective
Vendor lock-in is a major barrier to the adoption of cloud computing, due to the lack of standardization. Current solutions and efforts tackling the vendor lock-in problem are predominantly technology-oriented, and few studies analyse and highlight the complexity of the vendor lock-in problem in the cloud environment. Consequently, most customers are unaware of the proprietary standards which inhibit interoperability and portability of applications when taking services from vendors. This paper provides a critical analysis of the vendor lock-in problem from a business perspective. A survey combining qualitative and quantitative approaches was conducted in this study to identify the main risk factors that give rise to lock-in situations. The analysis of our survey of 114 participants shows that, as computing resources migrate from on-premise to the cloud, the vendor lock-in problem is exacerbated. Furthermore, the findings underline the importance of interoperability, portability and standards in cloud computing. A number of strategies are proposed for avoiding and mitigating lock-in risks when migrating to cloud computing. The strategies relate to contracts, selection of vendors that support standardised formats and protocols for data structures and APIs, and developing awareness of commonalities and dependencies among cloud-based solutions. We strongly believe that the implementation of these strategies has great potential to reduce the risks of vendor lock-in.
Enhancing Federated Cloud Management with an Integrated Service Monitoring Approach
Cloud Computing enables the construction and provisioning of virtualized service-based applications through simple and cost-effective outsourcing to dynamic service environments. Cloud Federations envisage a distributed, heterogeneous environment consisting of various cloud infrastructures, aggregating IaaS provider capabilities from both the commercial and the academic domains. In this paper, we introduce a federated cloud management solution that operates the federation through cloud-brokers for various IaaS providers. In order to enable enhanced provider selection and inter-cloud service executions, an integrated monitoring approach is proposed which is capable of measuring the availability and reliability of the provisioned services in different providers. To this end, a minimal metric monitoring service has been designed and used together with a service monitoring solution to measure cloud performance. The transparent and cost-effective operation on commercial clouds and the capability to simultaneously monitor both private and public clouds were the major design goals of this integrated cloud monitoring approach. Finally, the evaluation of our proposed solution is presented on different private IaaS systems participating in federations.
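The abstract does not detail the monitoring service's internals; as a purely illustrative sketch, the following Python snippet shows one way a federation broker could rank providers by availability measured from periodic probes. The class and function names are hypothetical, not the paper's API.

```python
# Hypothetical sketch (not the paper's monitoring service): tracking per-provider
# availability from periodic probe results in a federated setup.
from dataclasses import dataclass, field


@dataclass
class ProviderMonitor:
    """Aggregates probe outcomes for one IaaS provider in the federation."""
    name: str
    probes: list = field(default_factory=list)  # True = service responded

    def record(self, success: bool) -> None:
        self.probes.append(success)

    @property
    def availability(self) -> float:
        """Fraction of probes that succeeded (0.0 if never probed)."""
        return sum(self.probes) / len(self.probes) if self.probes else 0.0


def select_provider(monitors):
    """Broker-style selection: prefer the provider with the highest availability."""
    return max(monitors, key=lambda m: m.availability)


if __name__ == "__main__":
    private = ProviderMonitor("private-iaas")
    public = ProviderMonitor("public-iaas")
    for ok in (True, True, False, True):
        private.record(ok)
    for ok in (True, True, True, True):
        public.record(ok)
    print(select_provider([private, public]).name)  # -> public-iaas
```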
An Agent Architecture for Concurrent Bilateral Negotiations
We present an architecture that makes use of symbolic decision-making to support agents participating in concurrent bilateral negotiations. The architecture is a revised version of previous work with the KGP model [23, 12], which we specialise with knowledge about the agent's self, the negotiation opponents and the environment. Our work combines the specification of domain-independent decision-making with a new protocol for concurrent negotiation that revisits the well-known alternating offers protocol [22]. We show how the decision-making can be specialised to represent the agent's strategies, utilities and preferences using a Prolog-like meta-program. The work prepares the ground for supporting decision-making in concurrent bilateral negotiations that is more lightweight than previous work and contributes towards a fully developed model of the architecture.
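For readers unfamiliar with the alternating offers scheme that the paper's concurrent protocol revisits, the hypothetical Python sketch below runs a single bilateral negotiation with fixed concession steps. It is not the paper's KGP-based architecture, its concurrent protocol, or its Prolog-like meta-program; the starting offers, limits and step size are invented for the example.

```python
# Minimal, hypothetical sketch of a simplified alternating-offers negotiation.
def negotiate(buyer_start, buyer_limit, seller_start, seller_limit,
              rounds=20, step=5.0):
    """Agents alternate offers, each conceding by `step` per round.
    A deal is struck when the offers cross; None means no agreement in time."""
    buyer_offer, seller_offer = buyer_start, seller_start
    for _ in range(rounds):
        if seller_offer <= buyer_offer:           # offers have crossed: agree
            return (seller_offer + buyer_offer) / 2
        buyer_offer = min(buyer_limit, buyer_offer + step)     # buyer concedes up
        seller_offer = max(seller_limit, seller_offer - step)  # seller concedes down
    return None                                   # deadline reached, no agreement


# Buyer will pay at most 100, seller will accept at least 80: they meet at 85.
print(negotiate(buyer_start=50, buyer_limit=100, seller_start=120, seller_limit=80))
```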
On the Benefits of Transparent Compression for Cost-Effective Cloud Data Storage
Infrastructure-as-a-Service (IaaS) cloud computing has revolutionized the way we think of acquiring computational resources: it allows users to deploy virtual machines (VMs) at large scale and pay only for the resources that were actually used throughout the runtime of the VMs. This new model raises new challenges in the design and development of IaaS middleware: excessive storage costs associated with both user data and VM images might make the cloud less attractive, especially for users that need to manipulate huge data sets and a large number of VM images. Storage costs result not only from storage space utilization, but also from bandwidth consumption: in typical deployments, a large number of data transfers between the VMs and the persistent storage are performed, all under high performance requirements. This paper evaluates the trade-off resulting from transparently applying data compression to conserve storage space and bandwidth at the cost of slight computational overhead. We aim at reducing the storage space and bandwidth needs with minimal impact on data access performance. Our solution builds on BlobSeer, a distributed data management service specifically designed to sustain a high throughput for concurrent accesses to huge data sequences that are distributed at large scale. Extensive experiments demonstrate that our approach achieves large reductions (at least 40%) of bandwidth and storage space utilization, while still attaining high performance levels that even surpass the original (no compression) performance levels in several data-intensive scenarios.
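The trade-off the paper evaluates can be pictured with a small, self-contained measurement. The snippet below is not BlobSeer's compression layer; it is just a zlib example showing the storage saving versus CPU time for a deliberately repetitive data chunk.

```python
# Illustrative only: measure the space/CPU trade-off of transparent compression.
import time
import zlib

chunk = b"sensor-reading,42.0,ok;" * 50000   # ~1.1 MB of repetitive data

start = time.perf_counter()
compressed = zlib.compress(chunk, 6)          # moderate compression level
elapsed = time.perf_counter() - start

ratio = len(compressed) / len(chunk)
print(f"original:   {len(chunk)} bytes")
print(f"compressed: {len(compressed)} bytes ({ratio:.1%} of original)")
print(f"cpu time:   {elapsed * 1000:.2f} ms")
```

Highly compressible data such as logs or sparse VM images shrinks dramatically for a few milliseconds of CPU, which is the kind of gain the paper exploits transparently in the storage layer.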
Evolutionary approaches to signal decomposition in an application service management system
The increased demand for autonomous control in enterprise information systems has generated interest in efficient global search methods for multivariate datasets, in order to search for original elements in time-series patterns and build causal models of system interactions, utilization dependencies, and performance characteristics. In this context, activity signal deconvolution is a necessary step to achieve effective adaptive control in Application Service Management. The paper investigates the potential of population-based metaheuristic algorithms, particularly variants of particle swarm, genetic algorithms and differential evolution methods, for activity signal deconvolution when the application performance model is unknown a priori. In our approach, the Application Service Management System is treated as a black- or grey-box, and the activity signal deconvolution is formulated as a search problem, decomposing time-series that outline relations between action signals and utilization-execution time of resources. Experiments are conducted using a queue-based computing system model as a test-bed under different load conditions and search configurations. Special attention was paid to high-dimensional scenarios, testing effectiveness for large-scale multivariate data analyses that can obtain a near-optimal signal decomposition solution in a short time. The experimental results reveal benefits, qualities and drawbacks of the various metaheuristic strategies selected for a given signal deconvolution problem, and confirm the potential of evolutionary-type search to effectively explore the search space even in high-dimensional cases. The approach and the algorithms investigated can be useful in support of human administrators, or in enhancing the effectiveness of feature extraction schemes that feed decision blocks of autonomous controllers.
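As a rough illustration of formulating deconvolution as a search problem, the hypothetical sketch below uses differential evolution (one of the metaheuristics compared in the paper) to recover the mixing weights of known activity signals from a noisy aggregate utilization trace. The signals, weights and bounds are invented for the example and do not reproduce the paper's test-bed.

```python
# Hedged sketch: signal decomposition as a search problem solved with
# differential evolution (SciPy), not the paper's actual system or data.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
activities = np.stack([np.sin(2 * np.pi * 3 * t),       # periodic workload
                       np.clip(t - 0.5, 0, None),       # ramp-up workload
                       (t > 0.7).astype(float)])        # step workload
true_w = np.array([1.5, 4.0, 2.0])
observed = true_w @ activities + rng.normal(0, 0.05, t.size)  # noisy aggregate


def residual(w):
    """Objective: squared error between observed trace and the weighted mix."""
    return np.sum((observed - w @ activities) ** 2)


result = differential_evolution(residual, bounds=[(0, 10)] * 3, seed=1)
print(np.round(result.x, 2))   # weights close to [1.5, 4.0, 2.0]
```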
CloudSim Express: A Novel Framework for Rapid Low Code Simulation of Cloud Computing Environments
Cloud computing environment simulators enable cost-effective experimentation with novel infrastructure designs and management approaches by avoiding the significant costs incurred from repetitive deployments in real Cloud platforms. However, widely used Cloud environment simulators compromise on usability due to complexities in design and configuration, along with the added overhead of programming language expertise. Existing approaches attempting to reduce this overhead, such as script-based simulators and Graphical User Interface (GUI) based simulators, often compromise on the extensibility of the simulator. Simulator extensibility allows for customization at a fine-grained level, so reducing it significantly limits flexibility in creating simulations. To address these challenges, we propose an architectural framework to enable human-readable script-based simulations in existing Cloud environment simulators while minimizing the impact on simulator extensibility. We implement the proposed framework for the widely used Cloud environment simulator, the CloudSim toolkit, and compare it against state-of-the-art baselines using a practical use case. The resulting framework, called CloudSim Express, achieves extensible simulations while surpassing baselines with over a 71.43% reduction in code complexity and an 89.42% reduction in lines of code.
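The low-code idea can be pictured with a small, purely hypothetical sketch: a human-readable scenario description is expanded into simulator entities so the user never writes simulator code. The schema, class and function names below are illustrative and are not CloudSim Express's actual script format or API.

```python
# Hypothetical illustration of script-driven (low-code) simulation setup.
from dataclasses import dataclass

# A plain dict standing in for a human-readable scenario script.
scenario = {
    "datacenter": {"hosts": 4, "cores_per_host": 8, "mips_per_core": 2000},
    "workload":   {"vms": 10, "vm_cores": 2},
}


@dataclass
class Host:
    cores: int
    mips: int


def build_datacenter(spec):
    """Expand the declarative datacenter spec into a list of Host entities."""
    return [Host(cores=spec["cores_per_host"], mips=spec["mips_per_core"])
            for _ in range(spec["hosts"])]


hosts = build_datacenter(scenario["datacenter"])
print(f"{len(hosts)} hosts, {sum(h.cores for h in hosts)} cores total")
```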
An exploration of the determinants for decision to migrate existing resources to cloud computing using an integrated TOE-DOI model
Migrating existing resources to cloud computing is a strategic organisational decision that can be difficult. It requires the consideration and evaluation of a wide range of technical and organisational aspects. Although a significant amount of attention has been paid by many industrialists and academics to aid migration decisions, the procedure remains difficult. This is mainly due to underestimation of the range of factors and characteristics affecting the decision for cloud migration. Further research is needed to investigate the level of effect these factors have on migration decisions and the overall complexity. This paper aims to explore the level of complexity of the decision to migrate to the cloud. A research model based on the diffusion of innovation (DOI) theory and the technology-organization-environment (TOE) framework was developed. The model was tested using exploratory and confirmatory factor analysis. The quantitative analysis shows the level of impact of the identified variables on the decision to migrate. Seven determinants that contribute to the complexity of the decision are identified, and they need to be taken into account to ensure successful migration. This result has expanded the collective knowledge about the complexity of the issues that have to be considered when making decisions to migrate to the cloud. It contributes to the literature that addresses the complex and multidimensional nature of migrating to the cloud.
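The paper's survey data and measurement model are not reproduced here; as an illustration of the kind of analysis mentioned (exploratory factor analysis), the sketch below fits scikit-learn's FactorAnalysis to synthetic Likert-style responses generated from three hidden determinants. All data and dimensions are invented for the example.

```python
# Illustrative only: exploratory factor analysis on synthetic survey responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_respondents, n_items = 200, 12
latent = rng.normal(size=(n_respondents, 3))             # 3 hidden determinants
loadings = rng.normal(size=(3, n_items))                 # how items reflect them
responses = latent @ loadings + rng.normal(0, 0.5, (n_respondents, n_items))

fa = FactorAnalysis(n_components=3, random_state=0).fit(responses)
print(np.round(fa.components_, 2))   # estimated item loadings per factor
```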
Cloud e-learning for mechatronics: CLEM
This paper describes results of the CLEM project, Cloud E-learning for Mechatronics. CLEM is an example of a domain-specific cloud that is especially tuned to the needs of VET (Vocational Education and Training) teachers. An interesting development has been the creation of remote laboratories in the cloud. Learners can access such laboratories to support their practical learning of mechatronics without the need to set up laboratories at their own institutions. The cloud infrastructure enables multiple laboratories to come together virtually to create an ecosystem for educators and learners. From such a system, educators can pick and mix materials to create suitable courses for their students, and the learners can experience different types of devices and laboratories through the cloud. The paper provides an overview of this new cloud-based e-learning approach and presents the results. The paper explains how the use of cloud computing has enabled the development of a new method, showing how a holistic e-learning experience can be obtained through use of static, dynamic and interactive material together with facilities for collaboration and innovation.
