Fundamental Aspects of the ISM Fractality
The ubiquitous clumpy state of the ISM raises a fundamental and open problem
of physics, which is the correct statistical treatment of systems dominated by
long range interactions. A simple solvable hierarchical model is presented
which explains why systems dominated by gravity prefer to adopt a fractal
dimension around 2 or less, like the cold ISM and large scale structures. This
has direct relation with the general transparency, or blackness, of the
Universe.
Comment: 6 pages, LaTeX2e, crckapb macro, no figure, uuencoded compressed tar file. To be published in the proceedings of the "Dust-Morphology" conference, Johannesburg, 22-26 January, 1996, D. Block (ed.), Kluwer, Dordrecht.
Test Characteristics of Urinary Lipoarabinomannan and Predictors of Mortality among Hospitalized HIV-Infected Tuberculosis Suspects in Tanzania.
Tuberculosis is the most common cause of death among patients with HIV infection living in tuberculosis-endemic countries, but many cases are not diagnosed pre-mortem. We assessed the test characteristics of urinary lipoarabinomannan (LAM) and predictors of mortality among HIV-associated tuberculosis suspects in Tanzania. We prospectively enrolled hospitalized HIV-infected patients in Dar es Salaam with ≥2 weeks of cough or fever, or weight loss. Subjects gave 2 mL of urine to test for LAM using a commercially available ELISA, ≥2 sputum specimens for concentrated AFB smear and solid-media culture, and 40 mL of blood for culture. Among 212 evaluable subjects, 143 (68%) were female; the mean age was 36 years; and the median CD4 count was 86 cells/mm³. Sixty-nine subjects (33%) had culture confirmation of tuberculosis and 65 (31%) were LAM positive. For the 69 cases of sputum or blood culture-confirmed tuberculosis, LAM sensitivity was 65% and specificity 86%, compared to 36% and 98% for sputum smear. LAM test characteristics were not different in patients with bacteremia but showed higher sensitivity and lower specificity with decreasing CD4 cell count. Two-month mortality was 64 (53%) of 121 subjects with outcomes available. In multivariate analysis there was a significant association of mortality with absence of anti-retroviral therapy (p = 0.004) and a trend toward association with a positive urine LAM (p = 0.16). Among culture-negative patients, mortality was 9 (75%) of 12 in LAM-positive patients and 27 (38%) of 71 in LAM-negative patients (p = 0.02). Urine LAM is more sensitive than sputum smear and has utility for the rapid diagnosis of culture-confirmed tuberculosis in this high-risk population. Mortality data raise the possibility that urine LAM may also be a marker for culture-negative tuberculosis.
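The reported test characteristics follow from a standard 2×2 confusion-table calculation. As an illustration only, the sketch below back-derives approximate cell counts from the percentages in the abstract (45 of the 69 culture-confirmed cases LAM positive, 20 of the 143 unconfirmed subjects LAM positive; these counts are our reconstruction, not reported values) and recomputes sensitivity and specificity.

```python
# Sensitivity/specificity from a 2x2 table. The counts are approximate
# reconstructions from the percentages in the abstract, not reported data.
def sensitivity(tp, fn):
    """Fraction of true cases the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-cases the test correctly rules out."""
    return tn / (tn + fp)

# 69 culture-confirmed TB cases, 143 without confirmation (212 total).
tp, fn = 45, 24   # LAM positive / negative among confirmed cases (assumed)
fp, tn = 20, 123  # LAM positive / negative among the rest (assumed)

print(f"LAM sensitivity: {sensitivity(tp, fn):.0%}")  # ~65%
print(f"LAM specificity: {specificity(tn, fp):.0%}")  # ~86%
```

Note that the reconstructed counts are consistent with the abstract's totals: 45 + 20 = 65 LAM-positive subjects overall, as reported.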
Molecular Epidemiology of HIV-Associated Tuberculosis in Dar es Salaam, Tanzania: Strain Predominance, Clustering, and Polyclonal Disease.
Molecular typing of Mycobacterium tuberculosis can be used to elucidate the epidemiology of tuberculosis, including the rates of clustering, the frequency of polyclonal disease, and the distribution of genotypic families. We performed IS6110 typing and spoligotyping on M. tuberculosis strains isolated from HIV-infected subjects at baseline or during follow-up in the DarDar Trial in Tanzania and on selected community isolates. Clustering occurred in 203 (74%) of 275 subjects: 124 (80%) of 155 HIV-infected subjects with baseline isolates, 56 (69%) of 81 HIV-infected subjects with endpoint isolates, and 23 (59%) of 39 community controls. Overall, 113 (41%) subjects had an isolate representing the East Indian "GD" family. The rate of clustering was similar among vaccine and placebo recipients and among subjects with or without cellular immune responses to mycobacterial antigens. Polyclonal disease was detected in 6 (43%) of 14 patients with multiple specimens typed. Most cases of HIV-associated tuberculosis among subjects from this study in Dar es Salaam resulted from recently acquired infection. Polyclonal infection was detected, and isolates representing the East Indian GD strain family were the most common.
Physics in Riemann's mathematical papers
Riemann's mathematical papers contain many ideas that arise from physics, and
some of them are motivated by problems from physics. In fact, it is not easy to
separate Riemann's ideas in mathematics from those in physics. Furthermore,
Riemann's philosophical ideas are often in the background of his work on
science. The aim of this chapter is to give an overview of Riemann's
mathematical results based on physical reasoning or motivated by physics. We
also elaborate on the relation with philosophy. While discussing some of
Riemann's philosophical points of view, we review ideas on the same
subjects expressed by Riemann's predecessors, in particular Greek
philosophers, mainly the pre-Socratics and Aristotle. The final version of this
paper will appear in the book: From Riemann to differential geometry and
relativity (L. Ji, A. Papadopoulos and S. Yamada, eds.), Berlin: Springer, 2017.
Decision Making for Inconsistent Expert Judgments Using Negative Probabilities
In this paper we provide a simple random-variable example of inconsistent
information, and analyze it using three different approaches: Bayesian,
quantum-like, and negative probabilities. We then show that, at least for this
particular example, both the Bayesian and the quantum-like approaches have less
normative power than the negative probabilities one.
Comment: 14 pages, revised version to appear in the Proceedings of the QI2013 (Quantum Interactions) conference.
Quantum Zeno Effect and Light-Dark Periods for a Single Atom
The quantum Zeno effect (QZE) predicts a slow-down of the time development of
a system under rapidly repeated ideal measurements, and experimentally this was
tested for an ensemble of atoms using short laser pulses for non-selective
state measurements. Here we consider such pulses for selective measurements on
a single system. Each probe pulse will cause a burst of fluorescence or no
fluorescence. If the probe pulses were strictly ideal measurements, the QZE
would predict periods of fluorescence bursts alternating with periods of no
fluorescence (light and dark periods) which would become longer and longer with
increasing frequency of the measurements. The non-ideal character of the
measurements is taken into account by incorporating the laser pulses in the
interaction, and this is used to determine the corrections to the ideal case.
In the limit, when the time between the laser pulses goes to zero, no freezing
occurs but instead we show convergence to the familiar macroscopic light and
dark periods of the continuously driven Dehmelt system. An experiment of this
type should be feasible for a single atom or ion in a trap.
Comment: 16 pages, LaTeX, a4.sty; to appear in J. Phys.
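For the ideal-measurement limit the abstract starts from, the textbook Zeno calculation is easy to sketch: a two-level atom driven at Rabi frequency Ω and projectively measured every τ = T/N survives in its initial state with probability cos²(Ωτ/2) per interval, so the survival probability over T tends to 1 as N grows. The snippet below is our illustration of that ideal case (not the paper's non-ideal-pulse model, which removes the freezing).

```python
import math

def survival_probability(omega, total_time, n_measurements):
    """P(atom still in its initial state) after n ideal projective
    measurements, each following free Rabi evolution for tau = T/n."""
    tau = total_time / n_measurements
    per_interval = math.cos(omega * tau / 2) ** 2
    return per_interval ** n_measurements

# More frequent ideal measurements -> evolution freezes (quantum Zeno effect).
for n in (1, 10, 100, 1000):
    print(n, survival_probability(omega=1.0, total_time=1.0, n_measurements=n))
```

The printed probabilities increase monotonically toward 1 with N, which is the "longer and longer" light and dark periods of the ideal case; the paper's point is that incorporating the finite laser pulses into the dynamics replaces this divergence with the ordinary Dehmelt light-and-dark periods.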
On the origin of utility, weighting, and discounting functions: How they get their shapes and how to change their shapes
We present a theoretical account of the origin of the shapes of utility, probability weighting, and temporal discounting functions. In an experimental test of the theory, we systematically change the shape of revealed utility, weighting, and discounting functions by manipulating the distribution of monies, probabilities, and delays in the choices used to elicit them. The data demonstrate that there is no stable mapping between attribute values and their subjective equivalents. Expected and discounted utility theories, and also their descendants such as prospect theory and hyperbolic discounting theory, simply assert stable mappings to describe choice data and offer no account of the instability we find. We explain where the shape of the mapping comes from and, in describing the mechanism by which people choose, explain why the shape depends on the distribution of gains, losses, risks, and delays in the environment.
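The functional forms at issue are the standard ones from the literature the abstract names. As an illustration only (the parameter values are arbitrary, not taken from the paper), the sketch below contrasts exponential discounting, assumed by classical discounted-utility theory, with the hyperbolic form whose shape the experiment manipulates.

```python
import math

def exponential_discount(delay, rate=0.1):
    """Classical discounted-utility weight: exp(-rate * delay)."""
    return math.exp(-rate * delay)

def hyperbolic_discount(delay, k=0.1):
    """Hyperbolic weight 1 / (1 + k * delay) (Mazur's form)."""
    return 1.0 / (1.0 + k * delay)

# Relative to the exponential curve, hyperbolic weights decline much more
# slowly at long delays; this asymmetry produces the preference reversals
# that exponential discounting rules out.
for d in (0, 1, 10, 100):
    print(d, exponential_discount(d), hyperbolic_discount(d))
```

The paper's claim is that even the revealed shape of such curves is not fixed but depends on the distribution of delays used to elicit them, so neither fitted form is a stable description of choice.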
The effect of time constraint on anticipation, decision making, and option generation in complex and dynamic environments
Researchers interested in performance in complex and dynamic situations have focused on how individuals predict their opponents' potential courses of action (i.e., during assessment) and generate potential options about how to respond (i.e., during intervention). When generating predictive options, previous research supports the use of cognitive mechanisms that are consistent with long-term working memory (LTWM) theory (Ericsson and Kintsch in Psychol Rev 102(2):211–245, 1995; Ward et al. in J Cogn Eng Decis Mak 7:231–254, 2013). However, when generating options about how to respond, the extant research supports the use of the take-the-first (TTF) heuristic (Johnson and Raab in Organ Behav Hum Decis Process 91:215–229, 2003). While these models provide possible explanations about how options are generated in situ, often under time pressure, few researchers have tested the claims of these models experimentally by explicitly manipulating time pressure. The current research investigates the effect of time constraint on option-generation behavior during the assessment and intervention phases of decision making by employing a modified version of an established option-generation task in soccer. The results provide additional support for the use of LTWM mechanisms during assessment across both time conditions. During the intervention phase, option-generation behavior appeared consistent with TTF, but only in the non-time-constrained condition. Counter to our expectations, the implementation of time constraint resulted in a shift toward the use of LTWM-type mechanisms during the intervention phase. Modifications to the cognitive-process level descriptions of decision making during intervention are proposed, and implications for training during both phases of decision making are discussed.
On the role of different Skyrme forces and surface corrections in exotic cluster-decay
We present cluster decay studies of Ni formed in heavy-ion
collisions using different Skyrme forces. Our study reveals that different
Skyrme forces do not alter the transfer structure of fractional yields
significantly. The cluster decay half-lives of the different clusters lie within
±10% for PCM and ±15% for UFM.
Comment: 13 pages, 6 figures and 1 table; in press, Pramana Journal of Physics (2010).
Massive stars as thermonuclear reactors and their explosions following core collapse
Nuclear reactions transform atomic nuclei inside stars. This is the process
of stellar nucleosynthesis. The basic concepts of determining nuclear reaction
rates inside stars are reviewed. How stars manage to burn their fuel so slowly
most of the time is also considered. Stellar thermonuclear reactions involving
protons in hydrostatic burning are discussed first. Then I discuss triple alpha
reactions in the helium burning stage. Carbon and oxygen survive in red giant
stars because of the nuclear structure of oxygen and neon. Further nuclear
burning of carbon, neon, oxygen and silicon in quiescent conditions are
discussed next. In the subsequent core-collapse phase, neutronization due to
electron capture from the top of the Fermi sea in a degenerate core takes
place. The expected signal of neutrinos from a nearby supernova is calculated.
The supernova often explodes inside a dense circumstellar medium, which is
established due to the progenitor star losing its outermost envelope in a
stellar wind or mass transfer in a binary system. The nature of the
circumstellar medium and the ejecta of the supernova and their dynamics are
revealed by observations in the optical, IR, radio, and X-ray bands, and I
discuss some of these observations and their interpretations.
Comment: To be published in "Principles and Perspectives in Cosmochemistry", Lecture Notes of the Kodai School on Synthesis of Elements in Stars; ed. by Aruna Goswami & Eswar Reddy, Springer Verlag, 2009. Contains 21 figures.
