Building upon Fast Multipole Methods to Detect and Model Organizations
Many models in the natural and social sciences comprise sets of interacting entities whose intensity of interaction decreases with distance. This often gives rise to structures of interest composed of dense packs of entities. Fast Multipole Methods (FMM) are a family of methods developed to accelerate the computation of models such as these. We propose a method that builds upon FMM to detect and model the dense structures of these systems.
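The core idea of detecting "dense packs" via an FMM-style spatial decomposition can be illustrated with a minimal sketch: bin entities into a uniform grid (the coarsest level of a multipole-style hierarchy) and flag over-populated cells. This is an illustrative toy, not the authors' actual algorithm; the function name and threshold are assumptions.

```python
from collections import defaultdict

def detect_dense_cells(points, cell_size, min_count):
    """Bin 2-D points into a uniform grid (the coarsest level of an
    FMM-style spatial decomposition) and report cells whose population
    meets a density threshold."""
    grid = defaultdict(list)
    for x, y in points:
        grid[(int(x // cell_size), int(y // cell_size))].append((x, y))
    return {cell: pts for cell, pts in grid.items() if len(pts) >= min_count}

# A dense pack near the origin plus two isolated entities.
pts = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4), (0.4, 0.3), (5.0, 5.0), (9.0, 1.0)]
dense = detect_dense_cells(pts, cell_size=1.0, min_count=3)
print(sorted(dense))  # [(0, 0)] -- only the cell holding four entities
```

A real FMM would recurse this decomposition into a tree and compute far-field approximations per cell; the grid pass above is only the grouping step.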
GraphStream: A Tool for bridging the gap between Complex Systems and Dynamic Graphs
The notion of complex systems is common to many domains, from biology to economics, computer science, and physics. Often, these systems are made of sets of entities moving in an evolving environment. One of their major characteristics is the emergence of global properties stemming from local interactions between the entities themselves and between the entities and the environment. The structure of these systems as sets of interacting entities leads researchers to model them as graphs. However, understanding them most often requires considering the dynamics of their evolution: it is simply not relevant to study some properties outside of any temporal consideration. Dynamic graphs therefore seem to be a very suitable model for investigating the emergence and conservation of such properties. GraphStream is a Java-based library whose main purpose is to help researchers and developers in their daily tasks of dynamic problem modeling and of classical graph management: creation, processing, display, etc. It may also be used, and is indeed already used, for teaching purposes. GraphStream relies on an event-based engine that supports several event sources: events may be included in the core of the application, read from a file, or received from an event handler.
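The source/sink event model described above can be sketched in a few lines: a sink consumes a stream of graph-modification events and rebuilds the current graph state. This is an illustrative Python sketch of the pattern, not GraphStream's actual Java API; the event names are assumptions.

```python
class GraphSink:
    """Minimal event sink: rebuilds a graph from a stream of events,
    loosely mirroring an event-based dynamic-graph engine (illustrative
    sketch, not GraphStream's real API)."""
    def __init__(self):
        self.nodes, self.edges = set(), set()

    def handle(self, event):
        kind, *args = event
        if kind == "add_node":
            self.nodes.add(args[0])
        elif kind == "add_edge":
            a, b = args
            self.nodes.update((a, b))
            self.edges.add(frozenset((a, b)))
        elif kind == "remove_node":
            self.nodes.discard(args[0])
            self.edges = {e for e in self.edges if args[0] not in e}

# Events may come from application code, a file, or another handler.
sink = GraphSink()
for ev in [("add_node", "A"), ("add_edge", "A", "B"), ("remove_node", "A")]:
    sink.handle(ev)
print(sink.nodes, sink.edges)  # {'B'} set()
```

The point of the design is that any number of sources (files, generators, live feeds) can drive the same sink, so algorithms observe the graph as it evolves rather than as a static snapshot.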
Assessment of high (diurnal) to low (seasonal) frequency variations of isoprene emission rates using a neural network approach
Using a statistical approach based on artificial neural networks, an emission algorithm (ISO-LF) accounting for high- to low-frequency variations was developed for isoprene emission rates. ISO-LF was optimised using a database (ISO-DB) specifically designed for this work, which consists of 1321 emission rates collected from the literature and 34 environmental variables, measured or assessed using National Climatic Data Center or National Centers for Environmental Prediction meteorological databases. ISO-DB covers a large variety of emitters (25 species) and environmental conditions (10° S to 60° N). When only instantaneous environmental regressors (instantaneous air temperature T0 and photosynthetic photon flux density L0) were used, a maximum of 60% of the overall isoprene variability was captured, with the highest emissions being strongly underestimated. ISO-LF includes a total of 9 high-frequency (instantaneous) to low-frequency (up to 3 weeks) regressors and accounts for up to 91% of the isoprene emission variability, whatever the emission range, species, or climate investigated. ISO-LF was found to be mainly sensitive to air temperature cumulated over 3 weeks (T21) and to L0 and T0 variations. T21, T0 and L0 alone account for 76% of the overall variability.
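The distinction between instantaneous and accumulated regressors can be made concrete with a small sketch of the feature-building step: combine the instantaneous T0 and L0 with temperature accumulated over the preceding three weeks, a T21-like term. The function and its exact definitions are assumptions for illustration, not the paper's formulas.

```python
def regressors(daily_temp, today_temp, today_ppfd, window=21):
    """Build high- to low-frequency regressors of the kind the abstract
    describes: instantaneous T0 and L0 plus air temperature accumulated
    over the preceding `window` days (a T21-like term). Names and the
    simple sum are illustrative, not the paper's exact definitions."""
    t21 = sum(daily_temp[-window:])
    return {"T0": today_temp, "L0": today_ppfd, "T21": t21}

# Three weeks of 20 degC days, then a warm, bright measurement day.
feats = regressors(daily_temp=[20.0] * 21, today_temp=25.0, today_ppfd=1200.0)
print(feats)  # {'T0': 25.0, 'L0': 1200.0, 'T21': 420.0}
```

In the actual study these regressors feed a neural network; the sketch only shows why a three-week accumulation carries information that the instantaneous values alone cannot.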
Batsim: a Realistic Language-Independent Resources and Jobs Management Systems Simulator
As large-scale computation systems grow towards exascale, Resources and Jobs Management Systems (RJMS) need to evolve to manage this change of scale. However, their study is problematic since they are critical production systems, where experimenting is extremely costly due to downtime and energy costs. Meanwhile, many scheduling algorithms emerging from theoretical studies have not been transferred to production tools for lack of realistic experimental validation. To tackle these problems we propose Batsim, an extendable, language-independent and scalable RJMS simulator. It allows researchers and engineers to test and compare any scheduling algorithm, using a simple event-based communication interface which allows different levels of realism. In this paper we show that Batsim's behaviour matches that of the real RJMS OAR. Our evaluation process was designed with reproducibility in mind, and all the experiment material is freely available.
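A language-independent, event-based simulator/scheduler interface typically reduces to exchanging small serialized messages. The sketch below shows one plausible shape of such an exchange; the field names are assumptions for illustration, not Batsim's actual wire protocol.

```python
import json

def encode_event(timestamp, event_type, payload):
    """Serialize one simulator event as a JSON message, in the spirit of
    an event-based simulator/scheduler interface (field names here are
    illustrative, not Batsim's real protocol)."""
    return json.dumps({"now": timestamp, "type": event_type, "data": payload})

def decode_event(message):
    """Parse a message back into a plain dict on the scheduler side."""
    return json.loads(message)

msg = encode_event(10.5, "JOB_SUBMITTED", {"job_id": "w0!1", "res": 4})
event = decode_event(msg)
print(event["type"], event["data"]["res"])  # JOB_SUBMITTED 4
```

Because the scheduler only needs to parse and emit such messages, it can be written in any language, which is what makes this style of interface language-independent.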
Multi-Objective Group Discovery on the Social Web (Technical Report)
LIG research reports, ISSN 2105-0422. We are interested in discovering user groups from collaborative rating datasets of the form ⟨i, u, s⟩, where i ∈ I, u ∈ U, and s is the integer rating that user u has assigned to item i. Each user has a set of attributes that help find labeled groups such as young computer scientists in France or American female designers. We formalize the problem of finding user groups whose quality is optimized in multiple dimensions and show that it is NP-complete. We develop α-MOMRI, an α-approximation algorithm, and h-MOMRI, a heuristic-based algorithm, for multi-objective optimization to find high-quality groups. Our extensive experiments on real datasets from the social Web examine the performance of our algorithms and report cases where α-MOMRI and h-MOMRI are useful.
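The multi-objective selection step, keeping groups that no other group beats on every quality dimension, can be sketched as a Pareto-front filter. This toy shows the dominance test only; it is not the α-MOMRI or h-MOMRI algorithm, and the group labels and scores are made up.

```python
def pareto_front(groups):
    """Return labels of groups not dominated on any quality dimension.
    Each group is (label, scores); higher scores are better. Illustrative
    sketch of the multi-objective selection idea, not the paper's
    algorithms."""
    front = []
    for label, s in groups:
        dominated = any(all(o >= x for o, x in zip(t, s)) and t != s
                        for _, t in groups)
        if not dominated:
            front.append(label)
    return front

# Two incomparable groups and one dominated group (scores are invented).
groups = [("young CS scientists in France", (0.9, 0.4)),
          ("American female designers", (0.5, 0.8)),
          ("dominated group", (0.4, 0.3))]
print(pareto_front(groups))
# ['young CS scientists in France', 'American female designers']
```

The hardness result in the paper comes from searching the exponential space of attribute-defined groups; the dominance check itself is the easy part.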
Communication models insights meet simulations
It is well known that taking communications into account while scheduling jobs on large-scale parallel computing platforms is a crucial issue. In modern hierarchical platforms, communication times differ greatly depending on whether they occur inside a cluster or between clusters. Thus, allocating jobs with locality in mind is a key factor for reaching good performance. However, several theoretical results prove that imposing such constraints reduces the solution space and thus possibly degrades performance, while in practice such constraints simplify implementations and most often lead to better results. Our aim in this work is to bridge theoretical and practical intuitions, and to check the differences between constrained and unconstrained schedules (namely with respect to locality and node contiguity) through simulations. We have developed a generic tool, using SimGrid as the base simulator, enabling interactions with external batch schedulers to evaluate their scheduling policies. The results confirm that insights gained through theoretical models are ill-suited to current architectures and should be reevaluated.
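The contiguity constraint discussed above is easy to state in code: a contiguous allocation uses a consecutive range of node ids, and imposing it can force a job to skip otherwise-free nodes. This is an illustrative sketch of the constraint, not the simulation tool from the paper.

```python
def is_contiguous(nodes):
    """True when an allocation uses a consecutive range of node ids."""
    s = sorted(nodes)
    return s == list(range(s[0], s[0] + len(s)))

def allocate(free_nodes, size, contiguous):
    """Pick `size` nodes from the free set, optionally under the
    contiguity constraint (illustrative sketch)."""
    s = sorted(free_nodes)
    if not contiguous:
        return s[:size]
    for i in range(len(s) - size + 1):
        window = s[i:i + size]
        if is_contiguous(window):
            return window
    return None  # the constraint has shrunk the solution space

free = [0, 1, 3, 4, 5, 8]           # node 2 is busy, fragmenting the range
print(allocate(free, 3, contiguous=False))  # [0, 1, 3]
print(allocate(free, 3, contiguous=True))   # [3, 4, 5]
```

The unconstrained schedule grabs the first free nodes even across the gap; the constrained one must find a consecutive block, which is exactly the trade-off the simulations measure.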
A web-based platform for people with memory problems and their caregivers (CAREGIVERSPRO-MMD): Mixed-methods evaluation of usability
Background: The increasing number of people with dementia (PwD) drives research exploring Web-based support interventions to provide effective care for larger populations. In this context, a Web-based platform (CAREGIVERSPRO-MMD, 620911) was designed to (1) improve the quality of life for PwD, (2) reduce caregiver burden, (3) reduce the financial costs of care, and (4) reduce administration time for health and social care professionals. Objective: The objective of this study was to evaluate the usability and usefulness of the CAREGIVERSPRO-MMD platform for PwD or people with mild cognitive impairment (MCI), informal caregivers, and health and social care professionals, with respect to a wider strategy followed by the project to enhance the user-centered approach. A secondary aim of the study was to collect recommendations to improve the platform before the future pilot study. Methods: A mixed-methods design was employed, recruiting PwD or MCI (N=24), informal caregivers (N=24), and professionals (N=10). Participants were asked to rate their satisfaction, the perceived usefulness, and the ease of use of each function of the platform. Qualitative questions about the improvement of the platform were asked when participants provided low scores for a function. Testing occurred at baseline and 1 week after participants used the platform. The dropout rate from baseline to follow-up was approximately 10% (6/58). Results: After 1 week of platform use, the system was useful for 90% (20.75/23) of the caregivers and for 89% (5.36/6) of the professionals. When users responded to more than 1 question per platform function, the mean of satisfied users per function was calculated. These user groups also provided positive evaluations for ease of use (caregivers: 82%, 18.75/23; professionals: 97%, 5.82/6) and their satisfaction with the platform (caregivers: 79%, 18.08/23; professionals: 73%, 4.36/6).
Ratings from PwD were lower than those of the other groups for usefulness (57%, 13/23), ease of use (41%, 9.4/23), and overall satisfaction (47%, 11/23) with the platform.
Investigations on path indexing for graph databases
Graph databases have become an increasingly popular choice for the management of the massive network data sets arising in many contemporary applications. We investigate the effectiveness of path indexing for accelerating query processing in graph database systems, using as an exemplar the widely used open-source Neo4j graph database. We present a novel path index design which supports efficient ordered access to paths in a graph dataset. Our index is fully persistent and designed for external memory storage and retrieval. We also describe a compression scheme that exploits the limited differences between consecutive keys in the index, as well as a workload-driven approach to indexing. We demonstrate empirically the speed-ups achieved by our implementation, showing that the path index yields query run-times from 2x up to 8000x faster than Neo4j. Empirical evaluation also shows that our scheme leads to smaller indexes than using general-purpose LZ4 compression. The complete stand-alone implementation of our index, as well as supporting tooling such as a bulk-loader, are provided as open source for further research and development
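The compression scheme described above, exploiting the limited differences between consecutive sorted keys, is in the spirit of front coding: store, per key, the length of the prefix shared with the previous key plus the differing suffix. The sketch below illustrates that idea only; it is not the paper's actual on-disk format.

```python
def compress(sorted_keys):
    """Front-code a sorted key list: each entry is (shared-prefix length,
    differing suffix). Illustrative of delta-style key compression, not
    the paper's exact index layout."""
    out, prev = [], ""
    for k in sorted_keys:
        common = 0
        while common < min(len(prev), len(k)) and prev[common] == k[common]:
            common += 1
        out.append((common, k[common:]))
        prev = k
    return out

def decompress(entries):
    """Invert compress() by replaying the shared prefixes."""
    keys, prev = [], ""
    for common, suffix in entries:
        prev = prev[:common] + suffix
        keys.append(prev)
    return keys

keys = ["a/b/1", "a/b/2", "a/c/1"]   # path-like keys with long shared prefixes
enc = compress(keys)
print(enc)  # [(0, 'a/b/1'), (4, '2'), (2, 'c/1')]
assert decompress(enc) == keys
```

Because path keys in such an index are stored in order, neighbouring keys share long prefixes, which is why a scheme like this can beat general-purpose compression such as LZ4 on index size.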
Job Scheduling Using successive Linear Programming Approximations of a Sparse Model
EuroPar 2012. In this paper we tackle the well-known problem of scheduling a collection of parallel jobs on a set of processors, either in a cluster or in a multiprocessor computer. For the makespan objective, i.e., the completion time of the last job, this problem has been shown to be NP-hard, and several heuristics have already been proposed to minimize the execution time. We introduce a novel approach based on successive linear programming (LP) approximations of a sparse model. The idea is to relax an integer linear program and use ℓp-norm-based operators to force the solver to find almost-integer solutions that can be assimilated to an integer solution. We consider the case where jobs are either rigid or moldable: a rigid parallel job is performed with a predefined number of processors, while a moldable job can choose the number of processors it uses just before starting its execution. We compare this scheduling approach with the classic Largest Task First list-based algorithm and show that our approach provides good results for small instances of the problem. The contributions of this paper are both the integration of mathematical methods in the scheduling world and the design of a promising approach which gives good results for scheduling problems with fewer than a hundred processors.
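The Largest Task First baseline mentioned above is a classical list-scheduling rule: sort jobs by decreasing size and always assign the next one to the least-loaded processor. The sketch below simplifies to sequential jobs (one processor each), so it illustrates the baseline's greedy structure rather than the paper's parallel-job setting or its LP method.

```python
import heapq

def ltf_makespan(jobs, m):
    """List scheduling in Largest Task First order: sort jobs by
    decreasing processing time and greedily place each on the currently
    least-loaded of m processors. Simplified to sequential jobs; an
    illustrative baseline, not the paper's LP approach."""
    loads = [0.0] * m
    heapq.heapify(loads)            # min-heap of processor loads
    for p in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)               # completion time of the last job

print(ltf_makespan([7, 5, 4, 3, 2], m=2))  # 11.0
```

On this instance the greedy rule places 7 and 5 on different processors, then balances the rest, ending at a makespan of 11 against a total work of 21 on two processors (optimal would be 11 as well here, since 10.5 is not achievable with integer jobs).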
Challenges in technology transfer: an actor perspective in a quadruple helix environment
© 2016, Springer Science+Business Media New York. This article presents and tests a knowledge and technology transfer framework in a quadruple helix environment, from an actor perspective. The Canadian forest products industry provides a unique opportunity for data collection through case studies, as it is an industry built on a triple bottom line and managed for sustainable progress. By presenting the new framework to 31 professionals, we highlight the role of and the challenges faced by each helix. Several factors, such as culture, time-horizon management, and the adaptation of theory to practice, appear to be determinant in improving technology transfer. We see in our work an important contribution to the generalization of knowledge and technology transfer processes in a quadruple helix environment.
