Summability of the perturbative expansion for a zero-dimensional disordered spin model
We show analytically that the perturbative expansion for the free energy of
the zero-dimensional (quenched) disordered Ising model is Borel-summable in a
certain range of parameters, provided that the summation is carried out in two
steps: first, in the strength of the original coupling of the Ising model and
subsequently in the variance of the quenched disorder. This result is
illustrated by some high-precision calculations of the free energy obtained by
a straightforward numerical implementation of our sequential summation method.
Comment: LaTeX, 12 pages and 4 figures
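The role of Borel summation here can be illustrated on a toy example. The sketch below uses a plain zero-dimensional quartic integral with no disorder and plain Pade-Borel resummation, not the paper's two-step sequential scheme; it only shows how a divergent perturbative series can be resummed to the exact value.

```python
# Illustrative Pade-Borel summation for the toy zero-dimensional integral
# Z(g) = (2*pi)^(-1/2) * Integral exp(-x^2/2 - g*x^4) dx, whose perturbative
# series sum_n c_n g^n with c_n = (-1)^n (4n-1)!!/n! diverges for every g > 0
# but is Borel-summable.  This is a generic sketch, not the paper's method.
import math
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import pade

g = 0.1
N = 10  # number of series coefficients used

# Perturbative coefficients c_n and Borel-transform coefficients b_n = c_n / n!
c = [(-1) ** n * math.factorial(4 * n)
     / (2 ** (2 * n) * math.factorial(2 * n) * math.factorial(n))
     for n in range(N)]
b = np.array([cn / math.factorial(n) for n, cn in enumerate(c)])

# Pade approximant of the Borel transform, then the Laplace (Borel) integral
p, q = pade(b, 4)  # rational approximation of B(t), continues past the radius
borel = quad(lambda t: math.exp(-t) * p(g * t) / q(g * t), 0, np.inf)[0]

# Exact value by direct numerical integration, for comparison
exact = quad(lambda x: math.exp(-x * x / 2 - g * x ** 4),
             -np.inf, np.inf)[0] / math.sqrt(2 * math.pi)
print(borel, exact)
```

The partial sums of the raw series already diverge at g = 0.1 after a couple of terms; the Borel integral of the Pade-continued transform agrees with the exact integral to several digits.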
Reduced Sampling for Construction of Quadratic Response Surface Approximations Using Adaptive Experimental Design
The purpose of this paper is to reduce the computational complexity per step from O(n^2) to O(n) for optimization based on quadratic surrogates, where n is the number of design variables. Applying nonlinear optimization strategies directly to complex
multidisciplinary systems can be prohibitively expensive when the complexity of the simulation codes is large. Increasingly, response surface approximations, and specifically quadratic approximations, are being integrated with nonlinear optimizers in order to reduce the CPU time required for the optimization of complex multidisciplinary systems. For evaluation by the optimizer, response surface approximations provide a computationally inexpensive lower fidelity representation of the system performance. The curse of dimensionality is a major drawback in the implementation of these approximations as the amount of required data grows quadratically with the number n of design variables in the problem. In this paper a novel technique to reduce the magnitude of the sampling from O(n^2) to O(n) is presented. The technique uses prior information to approximate the eigenvectors of the Hessian matrix of the response surface approximation and only requires the eigenvalues to be computed by response surface techniques. The technique is implemented in a sequential approximate optimization algorithm and applied to engineering problems of variable size and characteristics. Results demonstrate that a reduction in the data required per step from O(n^2) to O(n) points can be accomplished without significantly compromising the performance of the optimization algorithm. A reduction in the time (number of system analyses) required per step from O(n^2) to O(n) is significant, even more so as n increases. The novelty lies in how only O(n) system analyses can be used to approximate a Hessian matrix whose estimation normally requires O(n^2) system analyses
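The central idea can be sketched as follows (a toy reconstruction, not the authors' implementation): if approximate eigenvector directions of the Hessian are available from prior information, only the n directional curvatures need to be estimated by sampling, at a cost of O(n) analyses instead of O(n^2).

```python
# Sketch of the O(n) Hessian-estimation idea.  The helper below is
# hypothetical and illustrative: given assumed eigenvector directions V
# (e.g. carried over from a previous optimization step), it estimates the
# n directional curvatures by central second differences and reassembles
# H ~ V diag(lam) V^T, using 2n + 1 function evaluations in total.
import numpy as np

def estimate_hessian(f, x0, V, h=1e-3):
    """Estimate a Hessian from second differences along the columns of V."""
    n = len(x0)
    f0 = f(x0)
    lam = np.empty(n)
    for i in range(n):
        v = V[:, i]
        # central second difference along v approximates v^T H v
        lam[i] = (f(x0 + h * v) - 2.0 * f0 + f(x0 - h * v)) / h ** 2
    return V @ np.diag(lam) @ V.T

# Demo on a quadratic with known Hessian; V is taken as the exact
# eigenvectors, the idealized best case that prior information approximates.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H_true = A @ A.T + 5 * np.eye(5)          # SPD test Hessian
f = lambda x: 0.5 * x @ H_true @ x
_, V = np.linalg.eigh(H_true)
H_est = estimate_hessian(f, np.zeros(5), V)
print(np.max(np.abs(H_est - H_true)))
```

When V only approximates the true eigenvectors, the reassembled Hessian is correspondingly approximate, which is the trade-off the paper's adaptive scheme manages.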
Homotopy methods for constraint relaxation in unilevel reliability based design optimization
Reliability based design optimization is a methodology for finding optimized designs that are characterized by a low probability of failure. The main objective in reliability based design optimization is to minimize a merit function while satisfying the reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system, or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of techniques have been developed for reliability based design optimization. Traditionally, these have been formulated as a double-loop (nested) optimization problem: the upper level optimization loop generally involves optimizing a merit function subject to reliability constraints, while the lower level optimization loop(s) compute the probabilities of failure corresponding to the failure mode(s) that govern the system failure. This formulation is, by nature, computationally intensive. A new efficient unilevel formulation for reliability based design optimization was developed by the authors in earlier studies. In this formulation, the lower level optimization (the evaluation of the reliability constraints in the double-loop formulation) was replaced by its corresponding first-order Karush-Kuhn-Tucker (KKT) necessary optimality conditions at the upper level optimization. It was shown that the unilevel formulation is computationally equivalent to solving the original nested optimization if the lower level optimization is solved by numerically satisfying the KKT conditions (which is typically the case), and that the two formulations are mathematically equivalent under constraint qualification and generalized convexity assumptions. In the unilevel formulation, the KKT conditions of the inner optimization for each probabilistic constraint evaluation are imposed at the system level as equality constraints. Most commercial optimizers, however, are numerically unreliable when applied to problems with many equality constraints. In this investigation an optimization framework for reliability based design using the unilevel formulation is developed. Homotopy methods are used for constraint relaxation and to obtain a relaxed feasible design. A sequence of optimization problems is solved as the relaxed problem is transformed via a homotopy to the original problem. A heuristic scheme is employed to update the homotopy parameter. The proposed algorithm is illustrated with example problems.
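The homotopy relaxation of equality constraints can be sketched generically (this is an illustration of the general idea on a toy problem, not the authors' RBDO algorithm or their heuristic update scheme):

```python
# Generic sketch of homotopy-based constraint relaxation: the equality
# constraint h(x) = 0 is relaxed to h(x) = (1 - lam) * h(x_start), which the
# starting point satisfies trivially at lam = 0.  The solution is then
# tracked as lam is stepped toward 1 (here a simple fixed schedule, in place
# of the paper's heuristic update), warm-starting each subproblem from the
# previous solution.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2          # merit function
h = lambda x: x[0] + x[1] - 2.0              # original equality constraint

x = np.array([5.0, -1.0])                    # infeasible starting design
h0 = h(x)
for lam in np.linspace(0.0, 1.0, 11):
    cons = {"type": "eq", "fun": lambda x, lam=lam: h(x) - (1 - lam) * h0}
    x = minimize(f, x, method="SLSQP", constraints=[cons]).x
print(x)   # approaches the constrained optimum (1, 1)
```

At lam = 1 the relaxed constraint coincides with the original one, so the final subproblem is the original optimization, reached through a sequence of well-posed, feasible intermediates.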
Sharp spectral stability estimates via the Lebesgue measure of domains for higher order elliptic operators
We prove sharp stability estimates for the variation of the eigenvalues of
non-negative self-adjoint elliptic operators of arbitrary even order upon
variation of the open sets on which they are defined. These estimates are
expressed in terms of the Lebesgue measure of the symmetric difference of the
open sets. Both Dirichlet and Neumann boundary conditions are considered
New Finite Rogers-Ramanujan Identities
We present two general finite extensions for each of the two Rogers-Ramanujan identities. One of these can be derived directly from Watson's transformation formula by specialization or through Bailey's method; the second, similar formula can be proved either by using the first formula and the q-Gosper algorithm, or through the so-called Bailey lattice.
Comment: 19 pages. To appear in Ramanujan
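For reference, the two classical Rogers-Ramanujan identities that are being extended can be checked numerically as formal power series in q (a sanity check by truncated polynomial arithmetic, not a proof):

```python
# Verify the two classical Rogers-Ramanujan identities
#   sum_{n>=0} q^{n^2}   / (q;q)_n = prod_{n>=0} 1/((1-q^{5n+1})(1-q^{5n+4}))
#   sum_{n>=0} q^{n^2+n} / (q;q)_n = prod_{n>=0} 1/((1-q^{5n+2})(1-q^{5n+3}))
# as power series truncated at order M, using plain coefficient lists.
M = 40

def mul(a, b):
    """Multiply two truncated power series (coefficient lists, length M+1)."""
    out = [0] * (M + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(M + 1 - i):
                out[i + j] += ai * b[j]
    return out

def geometric(k):
    """Series of 1/(1 - q^k), truncated at order M."""
    return [1 if i % k == 0 else 0 for i in range(M + 1)]

def rr_sum(shift):
    """Sum side: sum_{n>=0} q^{n^2 + shift*n} / (q;q)_n."""
    total = [0] * (M + 1)
    term = [1] + [0] * M                 # n = 0 term
    n = 0
    while n * n + shift * n <= M:
        for i, t in enumerate(term):
            total[i] += t
        n += 1
        # term_n = term_{n-1} * q^{2n-1+shift} / (1 - q^n)
        e = 2 * n - 1 + shift
        shifted = [0] * (M + 1)
        for i in range(M + 1 - e):
            shifted[i + e] = term[i]
        term = mul(shifted, geometric(n))
    return total

def rr_product(r1, r2):
    """Product side: prod_{n>=0} 1/((1-q^{5n+r1})(1-q^{5n+r2}))."""
    out = [1] + [0] * M
    k = 0
    while 5 * k + min(r1, r2) <= M:
        for r in (r1, r2):
            if 5 * k + r <= M:
                out = mul(out, geometric(5 * k + r))
        k += 1
    return out

first = rr_sum(0) == rr_product(1, 4)
second = rr_sum(1) == rr_product(2, 3)
print(first, second)
```

Both comparisons hold coefficient-by-coefficient up to order 40; the paper's finite extensions specialize to these identities in the appropriate limit.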
A Unified Account of the Moral Standing to Blame
Recently, philosophers have turned their attention to the question not of when a given agent is blameworthy for what she does, but of when a further agent has the moral standing to blame her for what she does. Philosophers have proposed at least four conditions on having “moral standing”:
1. One’s blame would not be “hypocritical”.
2. One is not oneself “involved in” the target agent’s wrongdoing.
3. One must be warranted in believing that the target is indeed blameworthy for the wrongdoing.
4. The target’s wrongdoing must be “one’s business”.
These conditions are often proposed both as conditions on one and the same thing and as marking fundamentally different ways of “losing standing.” Here I call these claims into question. First, I claim that conditions (3) and (4) are simply conditions on different things than conditions (1) and (2) are. Second, I argue that condition (2) reduces to condition (1): when “involvement” removes someone’s standing to blame, it does so only by indicating something further about that agent, viz., that he or she lacks commitment to the values that condemn the wrongdoer’s action. The result: once we clarify the nature of the non-hypocrisy condition, we have a unified account of the moral standing to blame. Issues also discussed: whether standing can ever be regained, the relationship between standing and our "moral fragility", the difference between mere inconsistency and hypocrisy, and whether a condition of standing might be derived from deeper facts about the "equality of persons".
Outbreak of beriberi among African union troops in Mogadishu, Somalia
Context and Objectives: In July 2009, WHO and partners were notified of a large outbreak of unknown illness, including deaths, among African Union (AU) soldiers in Mogadishu. Illnesses were characterized by peripheral edema, dyspnea, palpitations, and fever. Our objectives were to determine the cause of the outbreak, and to design and recommend control strategies.
Design, Setting, and Participants: The illness was defined as acute onset of lower limb edema, with dyspnea, chest pain, palpitations, nausea, vomiting, abdominal pain, or headache. Investigations in Nairobi and Mogadishu included clinical, epidemiologic, environmental, and laboratory studies. A case-control study was performed to identify risk factors for illness.
Results: From April 26, 2009 to May 1, 2010, 241 AU soldiers had lower limb edema and at least one additional symptom; four patients died. At least 52 soldiers were airlifted to hospitals in Kenya and Uganda. Four of 31 hospitalized patients in Kenya had right-sided heart failure with pulmonary hypertension. Initial laboratory investigations did not reveal hematologic, metabolic, infectious or toxicological abnormalities. Illness was associated with exclusive consumption of food provided to troops (not eating locally acquired foods) and a high level of insecurity (e.g., being exposed to enemy fire on a daily basis). Because the syndrome was clinically compatible with wet beriberi, thiamine was administered to ill soldiers, resulting in rapid and dramatic resolution. Blood samples taken from 16 cases prior to treatment showed increased levels of erythrocyte transketolase activation coefficient, consistent with thiamine deficiency. With mass thiamine supplementation for healthy troops, the number of subsequent beriberi cases decreased with no further deaths reported.
Conclusions: An outbreak of wet beriberi caused by thiamine deficiency due to restricted diet occurred among soldiers in a modern, well-equipped army. Vigilance to ensure adequate micronutrient intake must be a priority in populations completely dependent upon nutritional support from external sources
