Optimizing monitorability of multi-cloud applications
When adopting a multi-cloud strategy, selecting the cloud providers on which to deploy VMs is crucial for ensuring good behaviour of the developed application. This selection usually focuses on general information about the performance and capabilities offered by the cloud providers; less attention has been paid to monitoring services, although it is fundamental for the application developer to understand how the application behaves while it is running. In this paper we propose an approach based on a multi-objective mixed integer linear optimization problem for supporting the selection of cloud providers able to satisfy constraints on the monitoring dimensions associated with VMs. The balance between the quality of the monitored data and the cost of obtaining these data is considered, as well as the possibility for the cloud provider to enrich the set of monitored metrics through data analysis.
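A minimal sketch of how such a provider-selection problem could be posed as a mixed integer linear program, in Python with the PuLP library; the provider names, the quality and cost coefficients, and the weighted-sum scalarization of the two objectives are illustrative assumptions, not the formulation from the paper.

# Illustrative sketch only: a weighted-sum MILP assigning VMs to cloud
# providers, trading off monitoring-data quality against monitoring cost.
# All provider data and weights below are made up for the example.
import pulp

providers = ["A", "B", "C"]
vms = ["vm1", "vm2"]
quality = {"A": 0.9, "B": 0.7, "C": 0.5}   # monitoring quality score
cost = {"A": 8.0, "B": 5.0, "C": 2.0}      # monitoring cost per VM
w = 0.6                                    # weight on quality vs cost

prob = pulp.LpProblem("provider_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (vms, providers), cat="Binary")

# Each VM is deployed on exactly one provider.
for v in vms:
    prob += pulp.lpSum(x[v][p] for p in providers) == 1

# Maximize weighted quality minus weighted cost over all assignments.
prob += pulp.lpSum(
    x[v][p] * (w * quality[p] - (1 - w) * cost[p])
    for v in vms for p in providers
)

prob.solve()
for v in vms:
    for p in providers:
        if x[v][p].value() == 1:
            print(v, "->", p)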
Adapting a HEP Application for Running on the Grid
The goal of the EU IST int.eu.grid project is to build middleware facilities that enable the execution of real-time and interactive applications on the Grid. Within this research, support for the HEP application is provided by a Virtual Organization, a monitoring system, and a real-time dispatcher (RTD). These facilities realize the pilot-jobs idea, which allows grid resources to be allocated in advance and events to be analyzed in real time. In the paper we present the HEP Virtual Organization, the details of monitoring, and the RTD, and we show how the HEP application is run using these facilities to meet real-time application requirements.
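A minimal sketch of the pilot-jobs idea described above, in Python: a pilot occupies a worker in advance and then pulls analysis tasks from a dispatcher queue, so events are processed without per-job scheduling latency. The queue-based interface is an assumption for illustration only; it is not the int.eu.grid RTD protocol.

# Illustrative pilot-job sketch: the pilot holds a worker node and pulls
# tasks pushed by a dispatcher, mimicking advance resource allocation.
import queue
import threading
import time

tasks = queue.Queue()  # stands in for the real-time dispatcher (RTD)

def pilot(worker_id):
    # The pilot is submitted ahead of time; it idles on the node and
    # processes events as soon as the dispatcher releases them.
    while True:
        event = tasks.get()
        if event is None:          # shutdown signal
            break
        print(f"worker {worker_id}: analyzing event {event}")
        tasks.task_done()

threading.Thread(target=pilot, args=(1,), daemon=True).start()

for event_id in range(3):          # dispatcher pushes events in real time
    tasks.put(event_id)
    time.sleep(0.1)
tasks.join()
tasks.put(None)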
The ATLAS experiment on-line monitoring and filtering as an example of real-time application
The ATLAS detector, recording LHC particle interactions, produces events at a rate of 40 MHz and a size of 1.6 MB. Processes with new and interesting physics phenomena are very rare, so an efficient on-line filtering system (trigger) is necessary. The asynchronous part of that system relies on a few thousand computing nodes running the filtering software. Applying refined filtering criteria increases processing times, which may lead to a shortage of the processing resources installed at the CERN site. We propose an extension to this part of the system based on submitting the real-time filtering tasks to the Grid.
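The figures quoted above already show why filtering is unavoidable; a back-of-envelope check in plain Python, using only the numbers from the abstract:

# Raw data rate implied by the quoted figures: 40 MHz event rate
# at 1.6 MB per event.
event_rate_hz = 40e6
event_size_mb = 1.6
rate_tb_per_s = event_rate_hz * event_size_mb / 1e6  # MB -> TB
print(f"{rate_tb_per_s:.0f} TB/s before filtering")   # 64 TB/s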
Scaling evolutionary programming with the use of Apache Spark
Organizations across the globe gather more and more data, encouraged by easy-to-use and cheap cloud storage services. Large datasets require new approaches to analysis and processing, including methods based on machine learning. In particular, symbolic regression can provide many useful insights; unfortunately, due to its high resource requirements, using this method for large-scale dataset analysis might be unfeasible. In this paper, we analyze a bottleneck in an open-source implementation of this method called hubert and identify the evaluation of individuals as the most costly operation. As a solution, we propose a new evaluation service based on the Apache Spark framework, which attempts to speed up computations by executing them in a distributed manner on a cluster of machines. We analyze the performance of the service by comparing the evaluation execution time of a number of samples under both implementations. Finally, we draw conclusions and outline plans for further research.
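A minimal sketch of the distributed-evaluation idea, in PySpark: the population of candidate expressions is parallelized across the cluster and each candidate is scored against the dataset in a map step. The expression representation and the fitness function here are illustrative assumptions, not hubert's actual data structures.

# Illustrative sketch: evaluating a population of symbolic-regression
# candidates in parallel with Spark. Candidates are plain callables here;
# a real system would ship a serialized expression tree instead.
from pyspark import SparkContext

sc = SparkContext(appName="evaluation-sketch")

data = [(x, 2 * x + 1) for x in range(1000)]   # toy dataset of (x, y)
broadcast_data = sc.broadcast(data)

population = [
    lambda x: 2 * x + 1,
    lambda x: x * x,
    lambda x: 3 * x,
]

def fitness(individual):
    # Mean squared error of the candidate over the broadcast dataset.
    pts = broadcast_data.value
    return sum((individual(x) - y) ** 2 for x, y in pts) / len(pts)

scores = (sc.parallelize(list(enumerate(population)))
            .map(lambda pair: (pair[0], fitness(pair[1])))
            .collect())
print(scores)
sc.stop()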
Interoperability of monitoring tools
Networking, distributed, and grid computing have become commonly used programming paradigms. Due to the complicated nature of distributed and grid systems and the increasing complexity of applications designed for these architectures, development needs to be supported by different kinds of tools at every stage of the process. To avoid one tool improperly influencing another, these tools must cooperate; this ability is called interoperability. Tools can interoperate on different levels, from exchanging data in a common format to the semantic level, where one tool executes an action in response to an event raised by another tool. In this paper we present several interoperability models, focusing on their advantages and the major problems arising from their use. We also present the interoperability model designed and used in JINEXT, an extension of the OMIS specification intended to provide interoperability for OMIS-compliant tools.
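A minimal sketch of the semantic-level interoperability pattern described above, in Python: one tool subscribes to events published by another through a shared event bus and reacts by executing an action. The bus and the tool roles are hypothetical illustrations; JINEXT's actual mechanism is defined against the OMIS specification.

# Illustrative event-bus sketch: a debugger reacts to an event raised
# by a monitoring tool (semantic-level interoperability).
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()

# The debugger registers an action for the monitor's event.
bus.subscribe("breakpoint_hit", lambda p: print("debugger: suspend", p))

# The monitoring tool publishes; the debugger reacts automatically.
bus.publish("breakpoint_hit", {"process": 42, "line": 17})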
