Learning a Static Analyzer from Data
To be practically useful, modern static analyzers must precisely model the
effect of both statements in the programming language and the frameworks used
by the program under analysis. While important, manually addressing these
challenges is difficult for at least two reasons: (i) the effects on the
overall analysis can be non-trivial, and (ii) as the size and complexity of
modern libraries increase, so does the number of cases the analysis must handle.
In this paper we present a new, automated approach for creating static
analyzers: instead of manually providing the various inference rules of the
analyzer, the key idea is to learn these rules from a dataset of programs. Our
method consists of two ingredients: (i) a synthesis algorithm capable of
learning a candidate analyzer from a given dataset, and (ii) a counter-example
guided learning procedure which generates new programs beyond those in the
initial dataset, critical for discovering corner cases and ensuring the learned
analysis generalizes to unseen programs.
We implemented our approach and instantiated it for the task of learning
JavaScript static analysis rules for a subset of points-to analysis and for
allocation-site analysis. These are challenging yet important problems that
have received significant research attention. We show that our approach is
effective: our system automatically discovered practical and useful inference
rules for many cases that are tricky to identify manually and are missed by
state-of-the-art, manually tuned analyzers.
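For illustration, the counter-example guided learning loop described above can be sketched as a small runnable toy. The rule language ("discriminative tokens"), the oracle, and the program pool below are all invented for this sketch; the paper's actual synthesis algorithm and analysis rules are far richer.

```python
# Toy counter-example guided learning loop (CEGIS-style), invented for
# illustration; not the paper's actual rule language or synthesis algorithm.

def oracle(program):
    # Ground-truth stand-in: does the snippet allocate?
    return "new" in program or "[]" in program

def synthesize_candidate(dataset):
    # "Synthesis": keep tokens that appear in positive but not negative examples.
    pos = {t for prog, fact in dataset if fact for t in prog.split()}
    neg = {t for prog, fact in dataset if not fact for t in prog.split()}
    discriminative = pos - neg
    return lambda program: any(t in program.split() for t in discriminative)

def find_counterexample(candidate, pool):
    # Search for a program on which the candidate disagrees with the oracle.
    for program in pool:
        if candidate(program) != oracle(program):
            return (program, oracle(program))
    return None

def learn_analyzer(dataset, pool, max_iters=10):
    for _ in range(max_iters):
        candidate = synthesize_candidate(dataset)
        cex = find_counterexample(candidate, pool)
        if cex is None:
            return candidate            # generalizes over the whole pool
        dataset.append(cex)             # grow the dataset with the corner case
    raise RuntimeError("no convergence within the iteration budget")

pool = ["x = new Foo()", "y = []", "z = x + 1", "f(new Bar())"]
analyzer = learn_analyzer([("x = new Foo()", True), ("z = x + 1", False)], pool)
```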
Formal Verification of Neural Network Controlled Autonomous Systems
In this paper, we consider the problem of formally verifying the safety of an
autonomous robot equipped with a Neural Network (NN) controller that processes
LiDAR images to produce control actions. Given a workspace that is
characterized by a set of polytopic obstacles, our objective is to compute the
set of safe initial conditions such that a robot trajectory starting from these
initial conditions is guaranteed to avoid the obstacles. Our approach is to
construct a finite state abstraction of the system and use standard
reachability analysis over the finite state abstraction to compute the set of
the safe initial states. The first technical problem in computing the finite
state abstraction is to mathematically model the imaging function that maps the
robot position to the LiDAR image. To that end, we introduce the notion of
imaging-adapted sets as partitions of the workspace in which the imaging
function is guaranteed to be affine. We develop a polynomial-time algorithm to
partition the workspace into imaging-adapted sets along with computing the
corresponding affine imaging functions. Given this workspace partitioning, a
discrete-time linear dynamics of the robot, and a pre-trained NN controller
with Rectified Linear Unit (ReLU) nonlinearity, the second technical challenge
is to analyze the behavior of the neural network. To that end, we utilize a
Satisfiability Modulo Convex (SMC) encoding to enumerate all the possible
segments of different ReLUs. SMC solvers combine a Boolean satisfiability
solver with a convex programming solver to decompose the problem into smaller
subproblems. To accelerate this process, we develop a pre-processing algorithm
that rapidly prunes the space of feasible ReLU segments. Finally, we
demonstrate the efficiency of the proposed algorithms using numerical
simulations with increasing complexity of the neural network controller.
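A minimal sketch of what enumerating and pruning ReLU segments means: each activation pattern of a ReLU layer restricts the network to an affine map on a polytope, and infeasible patterns can be discarded by an LP feasibility check. The weights and input box below are toy values; the paper's actual SMC encoding is more sophisticated than this plain enumeration.

```python
# Enumerate activation patterns ("segments") of a one-hidden-layer ReLU
# network over a box input domain and prune infeasible ones via LP.
# Toy weights and domain; illustrative only, not the paper's SMC encoding.
import itertools
import numpy as np
from scipy.optimize import linprog

W1 = np.array([[1.0, -1.0], [0.5, 2.0]])   # hidden-layer weights (toy)
b1 = np.array([0.0, -1.0])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # input box

feasible_patterns = []
for pattern in itertools.product([0, 1], repeat=W1.shape[0]):
    # Neuron i active (pattern[i] = 1) means W1[i] @ x + b1[i] >= 0, else <= 0.
    A_ub, b_ub = [], []
    for i, active in enumerate(pattern):
        if active:
            A_ub.append(-W1[i]); b_ub.append(b1[i])    # -(W1 x) <= b1
        else:
            A_ub.append(W1[i]);  b_ub.append(-b1[i])   # W1 x <= -b1
    res = linprog(c=np.zeros(2), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=list(zip(lo, hi)), method="highs")
    if res.status == 0:                 # LP feasible: keep this segment
        feasible_patterns.append(pattern)

print(feasible_patterns)  # on each kept pattern the network is an affine map
```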
Quantum computing implementations with neutral particles
We review quantum information processing with cold neutral particles, that
is, atoms or polar molecules. First, we analyze the best suited degrees of
freedom of these particles for storing quantum information, and then we discuss
both single- and two-qubit gate implementations. We focus our discussion mainly
on collisional quantum gates, which are best suited for atom-chip-like devices,
as well as on gate proposals conceived for optical lattices. Additionally, we
analyze schemes both for cold atoms confined in optical cavities and hybrid
approaches to entanglement generation, and we show how optimal control theory
might be a powerful tool to speed up gate operations as well as to achieve the
high fidelities required for fault-tolerant quantum computation.
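The role the review assigns to optimal control can be made concrete with the standard gate-optimization formulation (a generic textbook form, not a result specific to this review):

```latex
% Generic quantum optimal control objective for gate design. Controls
% u_k(t) drive H(t) = H_0 + \sum_k u_k(t) H_k, and one maximizes the
% fidelity of the realized evolution U(T) against a target gate U_tgt
% on a d-dimensional computational subspace.
\begin{equation}
  F[u] = \frac{1}{d^{2}}\,
  \Bigl|\operatorname{Tr}\bigl(U_{\mathrm{tgt}}^{\dagger}\,U(T)\bigr)\Bigr|^{2},
  \qquad
  i\hbar\,\dot{U}(t) = H(t)\,U(t), \quad U(0) = \mathbb{1}.
\end{equation}
```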
Strong Interactions of Single Atoms and Photons near a Dielectric Boundary
Modern research in optical physics has achieved quantum control of strong
interactions between a single atom and one photon within the setting of cavity
quantum electrodynamics (cQED). However, to move beyond current
proof-of-principle experiments involving one or two conventional optical
cavities to more complex scalable systems that employ N >> 1 microscopic
resonators requires the localization of individual atoms on distance scales <
100 nm from a resonator's surface. In this regime an atom can be strongly
coupled to a single intracavity photon while at the same time experiencing
significant radiative interactions with the dielectric boundaries of the
resonator. Here, we report an initial step into this new regime of cQED by way
of real-time detection and high-bandwidth feedback to select and monitor single
Cesium atoms localized ~100 nm from the surface of a micro-toroidal optical
resonator. We employ strong radiative interactions of atom and cavity field to
probe atomic motion through the evanescent field of the resonator. Direct
temporal and spectral measurements reveal both the significant role of
Casimir-Polder attraction and the manifestly quantum nature of the atom-cavity
dynamics. Our work sets the stage for trapping atoms near micro- and
nano-scopic optical resonators for applications in quantum information science,
including the creation of scalable quantum networks composed of many
atom-cavity systems that interact via coherent exchanges of single photons.
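For orientation, the two competing interactions in this regime have standard textbook forms: the Jaynes-Cummings coupling of a two-level atom to a single cavity mode, and the Casimir-Polder surface attraction. These are generic expressions, not the experiment's measured parameters:

```latex
% Generic cavity-QED ingredients (textbook forms, not this experiment's
% fitted parameters): Jaynes-Cummings coupling of one atom to one cavity
% mode, with g(r) following the evanescent field near the resonator, and
% the Casimir-Polder attraction at distance d from the dielectric surface.
\begin{align}
  H_{\mathrm{JC}} &= \hbar\omega_c\, a^{\dagger}a
     + \hbar\omega_a\, \sigma^{+}\sigma^{-}
     + \hbar g(\mathbf{r})\bigl(a^{\dagger}\sigma^{-} + a\,\sigma^{+}\bigr), \\
  U_{\mathrm{CP}}(d) &\simeq -\frac{C_3}{d^{3}} \quad (d \ll \lambda),
  \qquad
  U_{\mathrm{CP}}(d) \simeq -\frac{C_4}{d^{4}} \quad (d \gg \lambda).
\end{align}
```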
Suggestions for improving the design of clinical trials in multiple sclerosis - results of a systematic analysis of completed phase III trials
This manuscript reviews the primary and secondary endpoints of pivotal phase III trials of immunomodulatory drugs in multiple sclerosis (MS). Considering the limitations of previous trial designs, we propose new standards for the planning of clinical trials, taking into account the latest insights into MS pathophysiology and patient-relevant aspects. Using a systematic overview of published phase III (pivotal) trials performed as part of applications for drug market approval, we evaluate the following characteristics: trial duration, number of trial participants, comparators, and endpoints (primary, secondary, magnetic resonance imaging outcomes, and patient-reported outcomes). From a patient perspective, the primary and secondary endpoints of clinical trials are only partially relevant. High-quality trial data pertaining to efficacy and safety that stretch beyond the time frame of pivotal trials are almost non-existent, and understanding of the long-term benefits and risks of disease-modifying MS therapy is largely lacking. Concrete proposals for the trial designs of relapsing(-remitting) multiple sclerosis/clinically isolated syndrome, primary progressive multiple sclerosis, and secondary progressive multiple sclerosis (e.g., study duration, mechanism of action, and choice of endpoints) are presented based on the results of the systematic overview. Given the increasing number of available immunotherapies, the therapeutic strategy in MS has shifted from a mere "relapse-prevention" approach to a personalized provision of medical care regarding the choice of appropriate drugs and their sequential application over the course of the disease. This personalized provision takes into consideration patient preferences as well as disease-related factors, such as objective clinical and radiographic findings, but also very burdensome symptoms such as fatigue, depression, and cognitive impairment. Future trial designs in MS will have to assign higher relevance to these patient-reported outcomes and will also have to implement surrogate measures that can serve as predictive markers for individual treatment response to new and investigational immunotherapies. This is an indispensable prerequisite for maximizing the benefit to individual patients participating in clinical trials. Moreover, appropriate trial designs and suitable enrolment criteria that correspond to the mode of action of the study drug will facilitate targeted prevention of adverse events, thus mitigating risks for individual study participants.
Translational toxicology in setting occupational exposure limits for dusts and hazard classification – a critical evaluation of a recent approach to translate dust overload findings from rats to humans
Background
We analyze the scientific basis and methodology used by the German MAK Commission in their recommendations for exposure limits and carcinogen classification of "granular biopersistent particles without known specific toxicity" (GBS). These recommendations are under review at the European Union level. We examine the underlying scientific assumptions in an attempt to reproduce the results. The MAK's human equivalent concentrations (HECs) are based on particle mass and on a volumetric model in which results from rat inhalation studies are translated to derive occupational exposure limits (OELs) and a carcinogen classification.
Methods
We followed the methods proposed by the MAK Commission and Pauluhn (2011). We also examined key assumptions in the metrics, such as the surface area of the human lung, the deposition fractions of inhaled dusts, and human clearance rates, as well as the risk of lung cancer among workers presumed to have some potential for lung overload, the physiological condition associated in rats with an increased risk of lung cancer.
Results
The MAK recommendations on exposure limits for GBS rest on numerous incorrect assumptions that adversely affect the final results. The procedures used to derive the respirable occupational exposure limit (OEL) could not be reproduced, a finding that raises considerable scientific uncertainty about the reliability of the recommendations. Moreover, the scientific basis of using the rat model is confounded by the fact that rats and humans show different cellular responses to inhaled particles, as demonstrated by bronchoalveolar lavage (BAL) studies in both species.
Conclusion
Classifying all GBS as carcinogenic to humans based on rat inhalation studies in which lung overload leads to chronic inflammation and cancer is inappropriate. Studies of workers who have been exposed to relevant levels of dust have not indicated an increase in lung cancer risk. Using the methods proposed by the MAK, we were unable to reproduce the OEL for GBS recommended by the Commission, but we identified substantial errors in the models. Considerable shortcomings in the assumed lung surface area, clearance rates, and deposition fractions, as well as the use of mass and volumetric metrics as opposed to a particle surface area metric, limit the scientific reliability of the proposed GBS OEL and carcinogen classification.
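For context, interspecies extrapolations of this kind typically equate deposited dose per unit alveolar surface area between rat and human. The sketch below shows that generic normalization; every parameter value is an illustrative placeholder, not the Commission's figure (which, per the abstract, the authors could not reproduce).

```python
# Generic interspecies dosimetric scaling for inhaled particles: choose the
# human concentration so that deposited dose per unit alveolar surface area
# matches the rat study. All values below are illustrative placeholders,
# NOT the MAK Commission's or Pauluhn's figures.

def human_equivalent_concentration(c_rat,
                                   vent_rat_m3_day, vent_human_m3_day,
                                   dep_frac_rat, dep_frac_human,
                                   surface_rat_m2, surface_human_m2):
    """HEC such that dose per lung surface area matches the rat exposure."""
    dose_per_area_rat = c_rat * vent_rat_m3_day * dep_frac_rat / surface_rat_m2
    # Solve c_human * V_h * DF_h / SA_h == dose_per_area_rat for c_human:
    return dose_per_area_rat * surface_human_m2 / (
        vent_human_m3_day * dep_frac_human)

# Illustrative orders of magnitude only:
hec = human_equivalent_concentration(
    c_rat=1.0,                  # mg/m^3 in the hypothetical rat study
    vent_rat_m3_day=0.29,       # rat ventilation integrated over a day
    vent_human_m3_day=10.0,     # occupational breathing volume per shift-day
    dep_frac_rat=0.06, dep_frac_human=0.12,
    surface_rat_m2=0.4, surface_human_m2=102.0)
print(f"illustrative HEC: {hec:.2f} mg/m^3")
```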
Micromechanical Properties of Injection-Molded Starch–Wood Particle Composites
The micromechanical properties of injection-molded starch–wood particle composites were investigated as a function of particle content and humidity conditions. The composite materials were characterized by scanning electron microscopy and X-ray diffraction methods. The microhardness of the composites was shown to increase notably with the concentration of the wood particles. In addition, creep behavior under the indenter and its temperature dependence were evaluated in terms of the independent contributions of the starch matrix and the wood microparticles to the hardness value. The influence of drying time on the density and weight uptake of the injection-molded composites was highlighted. The results revealed the role of the mechanism of water evaporation, showing that the dependence of water uptake on temperature was greater for the starch–wood composites than for the pure starch sample. Experiments performed during the drying process at 70°C indicated that the wood in the starch composites did not prevent water loss from the samples.
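The "independent contributions" evaluation mentioned above is conventionally expressed through a hardness additivity law for two-phase systems. This is a generic relation; the abstract does not name the exact model the study used:

```latex
% Standard additivity law for the microhardness H of a two-phase composite,
% with \varphi the wood-particle volume fraction (generic form; the exact
% model used in the study is not stated in the abstract).
\begin{equation}
  H = \varphi\, H_{\mathrm{wood}} + (1-\varphi)\, H_{\mathrm{starch}}
\end{equation}
```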
Calcaneal nonunion: three cases and a review of the literature
The long-term follow-up of intra-articular calcaneal fractures is often accompanied by complications: arthrosis, arthrofibrosis of the subtalar joint, and malunion occur frequently, whereas calcaneal nonunion is uncommon. Three cases are presented in this report, together with a review of the literature. Nonunion appears to occur more often after conservative treatment, but its pathophysiology remains unclear, although smoking may play a role.
Mucociliary and long-term particle clearance in airways of patients with immotile cilia
Spherical monodisperse ferromagnetic iron oxide particles of 1.9 μm geometric and 4.2 μm aerodynamic diameter were inhaled by seven patients with primary ciliary dyskinesia (PCD) using the shallow bolus technique and compared to 13 healthy non-smokers (NS) from a previous study. The bolus penetration front depth was limited to the phase 1 dead space volume. In PCD patients, deposition was 58 ± 8% after a breath-holding time of 8 s. Particle retention was measured by the magnetopneumographic method over a period of nine months. Particle clearance from the airways showed a fast and a slow phase. In PCD patients airway clearance was retarded and prolonged: 42 ± 12% of particles followed the fast phase, with a mean half-time of 16.8 ± 8.6 hours. The remaining fraction was cleared slowly, with a half-time of 121 ± 25 days. In healthy NS, 49 ± 9% of particles were cleared in the fast phase with a mean half-time of 3.0 ± 1.6 hours, characteristic of intact mucociliary clearance. There was no difference in the slow clearance phase between PCD patients and healthy NS. Despite non-functioning cilia, the effectiveness of airway clearance in PCD patients is comparable to that in healthy NS, albeit with kinetics prolonged to about one week, which may primarily reflect the effectiveness of cough clearance. This prolonged airway clearance allows longer residence times of bacteria and viruses in the airways and may be one reason for the increased frequency of infections in PCD patients.
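The reported two-phase clearance corresponds to a biexponential retention curve. The sketch below evaluates it with the PCD mean values quoted above (42% fast with a 16.8 h half-time, the remainder with a 121-day half-time); the quoted uncertainties are ignored.

```python
# Biexponential particle retention implied by the two clearance phases
# reported for PCD patients. Mean values from the abstract; uncertainties
# are ignored in this sketch.

def retained_fraction(t_hours, fast_frac=0.42, t_half_fast_h=16.8,
                      t_half_slow_h=121 * 24):
    fast = fast_frac * 0.5 ** (t_hours / t_half_fast_h)
    slow = (1 - fast_frac) * 0.5 ** (t_hours / t_half_slow_h)
    return fast + slow

for day in (1, 7, 30, 270):  # out to the nine-month observation period
    print(f"day {day:3d}: {retained_fraction(day * 24):.1%} retained")
```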
The fading of reported effectiveness. A meta-analysis of randomised controlled trials
BACKGROUND: The "real" effect size of a medical therapy is constant over time. In contrast, the effect size reported in randomised controlled trials (RCTs) may change over time because the sum of all kinds of bias influencing the reported effectiveness is not necessarily constant. As this would affect the validity of meta-analyses, we tested the hypothesis that the reported effect size decreases over time. Furthermore, we tested three hypotheses that would explain a possible change. METHODS: Because of well established outcome measures, the lipid-lowering drugs Pravastatin and Atorvastatin (serum low-density lipoprotein cholesterol, LDL-C) and the anti-glaucoma drugs Timolol and Latanoprost (intraocular pressure, IOP) were chosen for this investigation. Studies were identified by a standardized MEDLINE search. RCTs investigating the above identified medications administered as monotherapy, and in defined dosages, were included. Publication year, baseline (= pre-treatment value in the treatment group of interest) and post intervention means, number of patients and the assignment to experimental or control group were extracted for each study. RESULTS: A total of 625 citations were screened; 206 met the inclusion criteria. The reported effect size of Pravastatin (change of reported effect size in five years: -3.22% LDL-C, P < .0001), Timolol (-0.56 mmHg, P < .0001) and Latanoprost (-1.78 mmHg, P = .0074) decreased over time, while there was no significant change for Atorvastatin (+0.31% LDL-C, P = .8618). Multiple regression analysis showed that baseline values were the most important influencing factor; study size or treatment group did not play a significant role. CONCLUSION: The effectiveness of medical therapies reported in RCTs decreases over time in three of the four investigated pharmaceuticals, caused mainly by baseline differences. We call this phenomenon "fading of reported effectiveness". Under this condition the validity of a meta-analysis may be impaired. Therefore we propose to observe this phenomenon in future meta-analyses in order to guarantee a maximum of transparency
