COLLABORATIVE CONSUMPTION THROUGH NEW TECHNOLOGIES
The goal of this paper is to present the rise of collaborative consumption through new technologies, and to examine the impact of this new economic system on contemporary society.
The theoretical part first provides an overview of collaborative consumption in general, with definitions, a consideration of the levels of the sharing economy, and the main successes of collaborative organizations in the 21st century. A subsequent part highlights the key role of new technologies and the internet in the development of this concept, covering the notions of access and trust and the role played by communities in this system. The reasons for involvement in the sharing economy are then given, along with explanations of how the sharing economy answers issues of this century, followed by the problems concerning its impact on traditional businesses.
The experiment, survey, and interview were aimed at testing respondents' knowledge of the concept, discovering which reasons influence involvement, and finding out what place respondents give to the sharing economy as a change in consumption behavior. Data for the research were collected from 108 randomly selected respondents through an online survey. The interview was conducted with a Finnish user of couch surfing, a collaborative lifestyle.
This study supports the view that collaborative consumption implies a modification of today's economic system, and that it could redefine the relation to ownership inherited from the era of hyper-consumption. Respondents seemed mostly willing to try, and to adopt more regularly, collaborative behaviors.
In addition, developments in new technologies, combined with new reputation systems that build trust, are going to empower users and strengthen collaborative consumption. Nevertheless, the study questions the boundaries of the collaborative economy and its relation to the capitalist model. The collaborative economy will also have to manage data well in order to promote trust, the essence of this new system. To succeed, it should work in balance with traditional businesses and state institutions to manage a good transition.
SimGrid: a Sustained Effort for the Versatile Simulation of Large Scale Distributed Systems
In this paper we present SimGrid, a toolkit for the versatile simulation of large-scale distributed systems, whose development effort has been sustained for the last fifteen years. Over this time period SimGrid has evolved from a one-laboratory project in the U.S. into a scientific instrument developed by an international collaboration. The keys to making this evolution possible have been securing funding, improving the quality of the software, and increasing the user base. In this paper we describe how we have been able to make advances on all three fronts, on which we plan to intensify our efforts over the upcoming years.
Comment: 4 pages, submission to WSSSPE'1
Dynamic Performance Forecasting for Network-Enabled Servers in a Heterogeneous Environment
This paper presents a tool for dynamic forecasting of Network-Enabled Servers performance. FAST (Fast Agent's System Timer) is a software package allowing client applications to get an accurate forecast of communication and computation times and memory use in a heterogeneous environment. It relies on low-level software packages, i.e., network and host monitoring tools, and on our developments in the modeling of computation routines. The FAST internals and user interface are presented, and a comparison is given between the execution time predicted by FAST and the measured time of a complex matrix multiplication executed on a heterogeneous platform.
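The idea of combining a static model of a routine with dynamically monitored capacities can be sketched as follows. This is a minimal illustration, assuming a toy forecasting formula; the function names, units, and coefficients are illustrative assumptions, not FAST's actual API.

```python
# Hedged sketch of FAST-style forecasting: a static operation-count model of
# a routine combined with monitored host and network capacities. All names
# and formulas here are illustrative assumptions.

def dgemm_flops(n):
    """Operation count of an n x n dense matrix multiplication."""
    return 2.0 * n ** 3

def forecast_seconds(n, free_gflops, link_mb_per_s, payload_mb):
    """Predicted time: computation on the host's monitored free capacity,
    plus communication of the operands over the monitored link."""
    compute = dgemm_flops(n) / (free_gflops * 1e9)
    comm = payload_mb / link_mb_per_s
    return compute + comm

# A 1000x1000 multiply with 1 GFlop/s free on the host, sending 8 MB of
# operands over a 100 MB/s link: 2.0 s compute + 0.08 s communication.
print(forecast_seconds(1000, 1.0, 100.0, 8.0))
```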
Assessing the Performance of MPI Applications Through Time-Independent Trace Replay
Simulation is a popular approach to obtain objective performance indicators for platforms that are not at one's disposal. It may help with the dimensioning of compute clusters in large computing centers. In this work we present a framework for the off-line simulation of MPI applications. Its main originality with regard to the literature is to rely on time-independent execution traces. This allows us to completely decouple the acquisition process from the actual replay of the traces in a simulation context. We are then able to acquire traces for large application instances without being limited to an execution on a single compute cluster. Finally, our framework is built on top of a scalable, fast, and validated simulation kernel. In this paper, we present the time-independent trace format we use, investigate several acquisition strategies, detail the developed trace replay tool, and assess the quality of our simulation framework in terms of accuracy, acquisition time, simulation time, and trace size.
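The decoupling described above can be sketched in a few lines: a time-independent trace records only volumes of work (flops, bytes), and replay converts them into times for a target platform. This is a minimal sketch under assumed record fields and platform parameters, not the actual trace format or replay tool of the paper.

```python
# Hedged sketch: replaying a "time-independent" trace. Each record stores a
# volume of work (flops or bytes) rather than a duration, so the same trace
# can be replayed against any hypothetical platform. Field and function
# names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    rank: int       # process that issued the action
    action: str     # "compute" (volume in flops) or "send" (volume in bytes)
    volume: float   # amount of work, independent of where it was recorded

def replay(trace, flops_per_s, bytes_per_s):
    """Convert per-event volumes into a simulated clock per rank."""
    clocks = {}
    for ev in trace:
        if ev.action == "compute":
            dt = ev.volume / flops_per_s
        else:  # "send"
            dt = ev.volume / bytes_per_s
        clocks[ev.rank] = clocks.get(ev.rank, 0.0) + dt
    return clocks

trace = [Event(0, "compute", 2e9), Event(0, "send", 1e6),
         Event(1, "compute", 4e9)]
# On a 1 GFlop/s, 100 MB/s platform: rank 0 takes 2.0 + 0.01 s, rank 1 takes 4.0 s.
print(replay(trace, flops_per_s=1e9, bytes_per_s=1e8))
```

Because the trace stores volumes, replaying it with different `flops_per_s` or `bytes_per_s` values models different clusters without re-running the acquisition.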
Versatile, Scalable, and Accurate Simulation of Distributed Applications and Platforms
The study of parallel and distributed applications and platforms, whether in the cluster, grid, peer-to-peer, volunteer, or cloud computing domain, often mandates empirical evaluation of proposed algorithmic and system solutions via simulation. Unlike direct experimentation via an application deployment on a real-world testbed, simulation enables fully repeatable and configurable experiments for arbitrary hypothetical scenarios. Two key concerns are accuracy (so that simulation results are scientifically sound) and scalability (so that simulation experiments can be fast and memory-efficient). While the scalability of a simulator is easily measured, the accuracy of many state-of-the-art simulators is largely unknown because they have not been sufficiently validated. In this work we describe recent accuracy and scalability advances made in the context of the SimGrid simulation framework. A design goal of SimGrid is that it should be versatile, i.e., applicable across all aforementioned domains. We present quantitative results that show that SimGrid compares favorably to state-of-the-art domain-specific simulators in terms of scalability, accuracy, or the trade-off between the two. An important implication is that, contrary to popular wisdom, striving for versatility in a simulator is not an impediment but instead is conducive to improving both accuracy and scalability.
Program Termination and Worst Time Complexity with Multi-Dimensional Affine Ranking Functions
A standard method for proving the termination of a flowchart program is to exhibit a ranking function, i.e., a function from the program states to a well-founded set that strictly decreases at each program step. Our main contribution is an efficient algorithm for the automatic generation of multi-dimensional affine nonnegative ranking functions, a restricted class of ranking functions that can be handled with linear programming techniques. Our algorithm combines the generation of invariants (a technique from abstract interpretation) with an adaptation of multi-dimensional affine scheduling (a technique from automatic parallelization). We also prove the completeness of our technique with respect to its input and the class of rankings we consider. Finally, as a byproduct, by computing the cardinality of the range of the ranking function, we obtain an upper bound on the computational complexity of the source program that does not depend on restrictions on the shape of loops or on program structure. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. The method is tested on a large collection of test cases from the literature. We also point out future improvements to handle larger programs.
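The ranking-function idea can be illustrated on a one-dimensional case (the paper handles the multi-dimensional generalization). Below is a minimal sketch, assuming the toy loop and the affine ranking function rank(x, y) = x - y; it checks at runtime that the rank is nonnegative and strictly decreasing, and that the initial rank bounds the number of iterations, which is exactly the complexity-bound byproduct described above.

```python
# Hedged sketch: a 1-D affine ranking function for a toy loop. The loop and
# the ranking function are illustrative, not taken from the paper.

def rank(x, y):
    """Affine ranking function: nonnegative on reachable loop states and
    strictly decreasing at each iteration, hence the loop terminates."""
    return x - y

def loop(x, y):
    steps = 0
    while x > y:
        r_before = rank(x, y)
        y += 1                      # loop body
        steps += 1
        # The ranking conditions: strict decrease, and nonnegativity.
        assert rank(x, y) < r_before and rank(x, y) >= 0
    return steps

# The size of the ranking function's range bounds the iteration count:
# rank(10, 3) = 7, so the loop runs at most (here, exactly) 7 times.
print(loop(10, 3))
```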
Improving Simulations of MPI Applications Using A Hybrid Network Model with Topology and Contention Support
Proper modeling of collective communications is essential for understanding the behavior of medium-to-large-scale parallel applications, and even minor deviations in implementation can adversely affect the prediction of real-world performance. We propose a hybrid network model extending LogP-based approaches to account for topology and contention in high-speed TCP networks. This model is validated within SMPI, an MPI implementation provided by the SimGrid simulation toolkit. With SMPI, standard MPI applications can be compiled and run in a simulated network environment, and traces can be captured without incurring errors from tracing overheads or poor clock synchronization, as in physical experiments. SMPI provides features for simulating applications that require large amounts of time or resources, including selective execution, RAM folding, and off-line replay of execution traces. We validate our model by comparing traces produced by SMPI with those from other simulation platforms, as well as real-world environments.
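The flavor of a LogP-style message cost extended with contention can be sketched as follows. This is a minimal illustration under assumed parameters and an assumed fair-sharing contention rule; it is not the paper's calibrated hybrid model.

```python
# Hedged sketch of a LogP-style affine message cost (latency + size/bandwidth)
# extended with a simple contention term: concurrent flows share the link
# bandwidth equally. Coefficients and the sharing rule are illustrative
# assumptions, not the validated SMPI model.

def transfer_time(size_bytes, latency_s, bandwidth_bps, concurrent_flows=1):
    """Time to send a message when `concurrent_flows` flows contend for
    the same link, each getting an equal share of the bandwidth."""
    effective_bw = bandwidth_bps / concurrent_flows
    return latency_s + size_bytes / effective_bw

# One 1 MB message on an uncontended 1 Gbyte/s link with 10 us latency:
t1 = transfer_time(1e6, 10e-6, 1e9)
# The same message while 4 flows share the link:
t4 = transfer_time(1e6, 10e-6, 1e9, concurrent_flows=4)
print(t1, t4)  # contention makes the second transfer slower
```

Topology enters such a model by composing this per-link cost along the route of a message, with contention applied on each shared link.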
Yield conditions for deformation of amorphous polymer glasses
Shear yielding of glassy polymers is usually described in terms of the
pressure-dependent Tresca or von Mises yield criteria. We test these criteria
against molecular dynamics simulations of deformation in amorphous polymer
glasses under triaxial loading conditions that are difficult to realize in
experiments. Difficulties and ambiguities in extending several standard
definitions of the yield point to triaxial loads are described. Two
definitions, the maximum and offset octahedral stresses, are then used to
evaluate the yield stress for a wide range of model parameters. In all cases,
the onset of shear is consistent with the pressure-modified von Mises
criterion, and the pressure coefficient is nearly independent of many
parameters. Under triaxial tensile loading, the mode of failure changes to
cavitation.
Comment: 9 pages, 8 figures, revte
Application of ab initio calculations to collagen and brome mosaic virus
Title from PDF of title page, viewed on July 16, 2014. Thesis advisor: Wai-Yim Ching. Includes bibliographic references (pages 85-91).
In bio-related research, large proteins are of particular interest. We study two such proteins. Collagen contains one such protein, the collagen triple helix, which forms part of the structural matrix for animals, such as in their bones and teeth. 1JS9 is another protein, a component of the protein shell of the brome mosaic virus (BMV), and BMV is important for drug delivery and imaging. To better understand the properties of these proteins, quantum mechanically (QM) based results are needed; however, computationally feasible methods are also necessary. The Orthogonalized Linear Combination of Atomic Orbitals (OLCAO) method is well suited for application to such large proteins. However, a new approach is required to reduce the computational cost and increase computational feasibility, and we call this extension to the method the Amino-Acid Based Method (AAPM) of OLCAO. In brief, the AAPM calculates electronic, self-consistent field (SCF) potentials for individual amino acids with their neighboring amino acids included as a boundary condition. This allows the costly SCF part of the calculation to be skipped. Additionally, the number of potentials used to describe the 1JS9 protein is minimized. Results for effective charge and bond order are obtained and analyzed for collagen, and preliminary effective charge results are obtained for 1JS9. The effective charge results of the AAPM represent well those already obtained with the SCF OLCAO result, but with reduced cost and preserved accuracy. The bond order results for collagen also represent well the hydrogen bonding, based on bond distances observed in experimentally derived images, between the individual chains of the collagen triple helix, as well as the observed hydrogen bonding network.
Growth, microstructure, and failure of crazes in glassy polymers
We report on an extensive study of craze formation in glassy polymers.
Molecular dynamics simulations of a coarse-grained bead-spring model were
employed to investigate the molecular level processes during craze nucleation,
widening, and breakdown for a wide range of temperature, polymer chain length,
entanglement length, and strength of adhesive interactions between
polymer chains. Craze widening proceeds via a fibril-drawing process at
constant drawing stress. The extension ratio is determined by the entanglement
length, and the characteristic length of stretched chain segments in the
polymer craze is . In the craze, tension is mostly carried by the
covalent backbone bonds, and the force distribution develops an exponential
tail at large tensile forces. The failure mode of crazes changes from
disentanglement to scission for , and breakdown through scission
is governed by large stress fluctuations. The simulations also reveal
inconsistencies with previous theoretical models of craze widening that were
based on continuum-level hydrodynamics.
