585 research outputs found
Assessing evaluation procedures for individual researchers: the case of the Italian National Scientific Qualification
The Italian National Scientific Qualification (ASN) was introduced as a
prerequisite for applying for tenured associate or full professor positions at
state-recognized universities. The ASN is meant to attest that an individual
has reached a suitable level of scientific maturity to apply for professorship
positions. A five-member panel, appointed for each scientific discipline, is in
charge of evaluating applicants by means of quantitative indicators of impact
and productivity, and through an assessment of their research profile. Many
concerns were raised about the appropriateness of the evaluation criteria, and in
particular on the use of bibliometrics for the evaluation of individual
researchers. Additional concerns were related to the perceived poor quality of
the final evaluation reports. In this paper we assess the ASN in terms of
appropriateness of the applied methodology, and the quality of the feedback
provided to the applicants. We argue that the ASN is not fully compliant with
the best practices for the use of bibliometric indicators for the evaluation of
individual researchers; moreover, the quality of final reports varies
considerably across the panels, suggesting that measures should be put in place
to prevent sloppy practices in future ASN rounds.
Quantitative Analysis of the Italian National Scientific Qualification
The Italian National Scientific Qualification (ASN) was introduced in 2010 as
part of a major reform of the national university system. Under the new
regulation, the scientific qualification for a specific role (associate or full
professor) and field of study is required in order to apply for a permanent
professorship. The ASN is peculiar in that it uses bibliometric indicators
with associated thresholds as one of the parameters used to assess applicants.
Overall, more than 59,000 applications were submitted, and the results have been
made publicly available for a short period of time, including the values of the
quantitative indicators for each applicant. The availability of this wealth of
information provides an opportunity to draw a fairly detailed picture of a
nation-wide evaluation exercise, and to study the impact of the bibliometric
indicators on the qualification results. In this paper we provide a first
account of the Italian ASN from a quantitative point of view. We show that
significant differences exist among scientific disciplines, in particular in
the fraction of qualified applicants, which cannot be easily explained.
Furthermore, we describe some issues related to the definition and
use of the bibliometric indicators and thresholds. Our analysis aims at drawing
attention to potential problems that should be addressed by decision-makers in
future ASN rounds.
Comment: ISSN 1751-157
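The threshold mechanism discussed above can be sketched as a simple eligibility gate. This is a hedged illustration: the indicator names and the "at least two of three indicators above the field threshold" rule are assumptions for the sketch, not the exact ministerial criteria.

```python
# Hypothetical sketch of an ASN-style bibliometric gate: a candidate is
# eligible when at least `required` indicators meet the field-specific
# threshold. Indicator names and the two-of-three rule are illustrative.
def passes_thresholds(indicators, thresholds, required=2):
    exceeded = sum(1 for name, value in indicators.items()
                   if value >= thresholds[name])
    return exceeded >= required
```

A panel would still combine such a quantitative gate with the qualitative assessment of the research profile described in the abstract.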
Parallel Sort-Based Matching for Data Distribution Management on Shared-Memory Multiprocessors
In this paper we consider the problem of identifying intersections between
two sets of d-dimensional axis-parallel rectangles. This is a common problem
that arises in many agent-based simulation studies, and is of central
importance in the context of High Level Architecture (HLA), where it is at the
core of the Data Distribution Management (DDM) service. Several realizations of
the DDM service have been proposed; however, many of them are either
inefficient or inherently sequential. These are serious limitations since
multicore processors are now ubiquitous, and DDM algorithms -- being
CPU-intensive -- could benefit from additional computing power. We propose a
parallel version of the Sort-Based Matching algorithm for shared-memory
multiprocessors. Sort-Based Matching is one of the most efficient serial
algorithms for the DDM problem, but is quite difficult to parallelize due to
data dependencies. We describe the algorithm and compute its asymptotic running
time; we complete the analysis by assessing its performance and scalability
through extensive experiments on two commodity multicore systems based on a
dual-socket Intel Xeon processor and a single-socket Intel Core i7 processor.
Comment: Proceedings of the 21st ACM/IEEE International Symposium on
Distributed Simulation and Real Time Applications (DS-RT 2017). Best Paper
Award @DS-RT 201
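The serial Sort-Based Matching idea, restricted to one dimension for brevity, can be sketched as an endpoint sweep over sorted interval extents. This is a hedged illustration of the serial baseline, not the paper's parallel algorithm; the d-dimensional case applies the same sweep per axis.

```python
# Sketch of serial Sort-Based Matching for 1-D intervals: sort all
# endpoints, sweep left to right, and keep "active" sets of open
# subscription and update intervals; every open/open pair overlaps.
def sort_based_matching(subscriptions, updates):
    events = []  # (coordinate, side, kind, index); side 0 = open, 1 = close
    for i, (lo, hi) in enumerate(subscriptions):
        events.append((lo, 0, 'S', i))
        events.append((hi, 1, 'S', i))
    for i, (lo, hi) in enumerate(updates):
        events.append((lo, 0, 'U', i))
        events.append((hi, 1, 'U', i))
    events.sort()  # opens sort before closes at equal coordinates
    active_s, active_u = set(), set()
    matches = set()  # pairs (subscription index, update index)
    for _, side, kind, idx in events:
        if side == 0:
            if kind == 'S':
                matches.update((idx, u) for u in active_u)
                active_s.add(idx)
            else:
                matches.update((s, idx) for s in active_s)
                active_u.add(idx)
        else:
            (active_s if kind == 'S' else active_u).discard(idx)
    return matches
```

The data dependency that makes parallelization hard is visible here: the active sets evolve sequentially along the sweep.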
A Framework for QoS-aware Execution of Workflows over the Cloud
The Cloud Computing paradigm is providing system architects with a new
powerful tool for building scalable applications. Clouds allow allocation of
resources on a "pay-as-you-go" model, so that additional resources can be
requested during peak loads and released afterwards. However, this flexibility
calls for appropriate dynamic reconfiguration strategies. In this paper we
describe SAVER (qoS-Aware workflows oVER the Cloud), a QoS-aware algorithm for
executing workflows involving Web Services hosted in a Cloud environment. SAVER
allows execution of arbitrary workflows subject to response time constraints.
SAVER uses a passive monitor to identify workload fluctuations based on the
observed system response time. The information collected by the monitor is used
by a planner component to identify the minimum number of instances of each Web
Service which should be allocated in order to satisfy the response time
constraint. SAVER uses a simple Queueing Network (QN) model to identify the
optimal resource allocation. Specifically, the QN model is used to identify
bottlenecks, and predict the system performance as Cloud resources are
allocated or released. The parameters used to evaluate the model are those
collected by the monitor, which means that SAVER does not require any
particular knowledge of the Web Services and workflows being executed. Our
approach has been validated through numerical simulations, whose results are
reported in this paper.
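The planner's reasoning can be illustrated with a deliberately simplified queueing model: treat a replicated Web Service as a single M/M/1 queue whose service capacity grows linearly with the number of allocated instances, and allocate the smallest number of instances that meets the response time constraint. This is an assumption made for the sketch; SAVER's actual QN model is richer.

```python
# Hedged sketch: smallest instance count m such that the M/M/1
# response time R = 1 / (m*mu - lam) satisfies the constraint,
# where lam is the arrival rate and mu the per-instance service rate.
def min_instances(lam, mu, max_response_time, max_m=1000):
    for m in range(1, max_m + 1):
        capacity = m * mu
        if capacity > lam and 1.0 / (capacity - lam) <= max_response_time:
            return m
    raise ValueError("constraint unsatisfiable within max_m instances")
```

Note that, as in SAVER, only externally observable quantities (arrival and service rates) are needed, not knowledge of the Web Services themselves.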
The four-dimensional on-shell three-point amplitude in spinor-helicity formalism and BCFW recursion relations
Lecture notes on Poincaré-invariant scattering amplitudes and tree-level
recursion relations in spinor-helicity formalism. We illustrate the
non-perturbative constraints imposed over on-shell amplitudes by the Lorentz
Little Group, and review how they completely fix the three-point amplitude
involving either massless or massive particles. Then we present an introduction
to tree-level BCFW recursion relations, and some applications for massless
scattering, where the derived three-point amplitudes are employed.
Comment: 41+2 pages, 4 figure
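As a concrete instance of the little-group argument, for three massless particles with helicities $h_1, h_2, h_3$ whose sum is negative, covariance fixes the amplitude up to an overall coupling (a standard result reviewed in such lecture notes; conventions vary between references):

```latex
A_3\left(1^{h_1},\,2^{h_2},\,3^{h_3}\right) \;=\;
g\,\langle 12\rangle^{\,h_3-h_1-h_2}\,
\langle 23\rangle^{\,h_1-h_2-h_3}\,
\langle 31\rangle^{\,h_2-h_3-h_1}
```

with the conjugate expression in square-bracket spinor products when the total helicity is positive.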
Analytic pseudo-Goldstone bosons
We consider the interplay between explicit and spontaneous symmetry breaking
in strongly coupled field theories. Some well-known statements, such as the
Gell-Mann-Oakes-Renner relation, descend directly from the Ward identities and
have thus a general relevance. Such Ward identities are recovered in
gauge/gravity dual setups through holographic renormalization. In a simple
paradigmatic three-dimensional toy model, we find analytic expressions for the
two-point correlators which match all the quantum field theoretical
expectations. Moreover, we have access to the full spectrum, which is
reminiscent of linear confinement.
Comment: 20 pages, 4 figures, v2 minor correction
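For reference, the Gell-Mann-Oakes-Renner relation mentioned above reads, in a common QCD convention (signs and normalizations vary between references):

```latex
f_\pi^2\, m_\pi^2 \;=\; -\,(m_u + m_d)\,\langle \bar{q} q \rangle
\;+\; \mathcal{O}\!\left(m_q^2\right)
```

relating the explicit-breaking parameters (the quark masses) to the order parameter of spontaneous breaking (the chiral condensate), which is precisely the interplay the Ward identities encode.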
A Monitoring System for the BaBar INFN Computing Cluster
Monitoring large clusters is a challenging problem. It is necessary to
observe a large number of devices with a reasonably short delay between
consecutive observations. The set of monitored devices may include PCs, network
switches, tape libraries, and other equipment. The monitoring activity should
not impact the performance of the system. In this paper we present PerfMC, a
monitoring system for large clusters. PerfMC is driven by an XML configuration
file, and uses the Simple Network Management Protocol (SNMP) for data
collection. SNMP is a standard protocol implemented by many networked
equipment, so the tool can be used to monitor a wide range of devices. System
administrators can display information on the status of each device by
connecting to a Web server embedded in PerfMC. The Web server can produce
graphs showing the value of different monitored quantities as a function of
time; it can also produce arbitrary XML pages by applying XSL Transformations
to an internal XML representation of the cluster's status. XSL Transformations
may be used to produce HTML pages which can be displayed by ordinary Web
browsers. PerfMC aims at being relatively easy to configure and operate, and
highly efficient. It is currently being used to monitor the Italian
Reprocessing farm for the BaBar experiment, which is made of about 200 dual-CPU
Linux machines.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003, 10 pages, LaTeX, 4 eps figures. PSN
MOET00
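The collection loop of such an SNMP-driven monitor can be sketched as follows. This is a hedged illustration: `snmp_get` is a hypothetical stand-in for a real SNMP GET operation (as provided by an SNMP library), and the device names are invented.

```python
# Hedged sketch of a PerfMC-style polling loop: query each device for a
# set of OIDs every `interval` seconds and accumulate the samples.
# `snmp_get(device, oid)` is a hypothetical SNMP GET callable.
import time

def poll(devices, oids, snmp_get, interval, rounds):
    history = []
    for _ in range(rounds):
        sample = {dev: {oid: snmp_get(dev, oid) for oid in oids}
                  for dev in devices}
        history.append(sample)
        time.sleep(interval)
    return history
```

A real monitor would additionally bound the per-round polling time and handle timeouts, so that the observation delay stays short even with many devices.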
Fault-Tolerant Adaptive Parallel and Distributed Simulation
Discrete Event Simulation is a widely used technique for modeling and
analyzing complex systems in many fields of science and engineering. The
increasingly large size of simulation models poses a serious computational
challenge, since the time needed to run a simulation can be prohibitively
large. For this reason, Parallel and Distributed Simulation techniques have
been proposed to take advantage of the multiple execution units found in
multicore processors, clusters of workstations, or HPC systems. The current
generation of HPC systems includes hundreds of thousands of computing nodes and
a vast amount of ancillary components. Despite improvements in manufacturing
processes, failures of some components are frequent, and the situation will get
worse as larger systems are built. In this paper we describe FT-GAIA, a
software-based fault-tolerant extension of the GAIA/ARTÌS parallel simulation
middleware. FT-GAIA transparently replicates simulation entities and
distributes them on multiple execution nodes. This allows the simulation to
tolerate crash-failures of computing nodes; furthermore, FT-GAIA offers some
protection against Byzantine failures since synchronization messages are
replicated as well, so that the receiving entity can identify and discard
corrupted messages. We provide an experimental evaluation of FT-GAIA on a
running prototype. Results show that a high degree of fault tolerance can be
achieved, at the cost of a moderate increase in the computational load of the
execution units.
Comment: Proceedings of the IEEE/ACM International Symposium on Distributed
Simulation and Real Time Applications (DS-RT 2016
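The filtering idea for replicated synchronization messages, namely accepting a message only when a strict majority of its replicas agree, can be sketched as follows; the (id, payload) message representation is an assumption made for the illustration, not FT-GAIA's actual wire format.

```python
# Hedged sketch of majority voting over replicated messages: group the
# received copies by message id and keep the payload that a strict
# majority of the replicas agree on, discarding corrupted minorities.
from collections import Counter

def filter_replicated(messages, replication_degree):
    by_id = {}
    for msg_id, payload in messages:
        by_id.setdefault(msg_id, []).append(payload)
    accepted = {}
    for msg_id, payloads in by_id.items():
        payload, count = Counter(payloads).most_common(1)[0]
        if count > replication_degree // 2:
            accepted[msg_id] = payload
    return accepted
```

With replication degree r, this tolerates up to floor((r-1)/2) corrupted copies of any message, at the cost of the extra message traffic noted in the abstract.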
