Reconstructing magnetic fields of spiral galaxies from radiopolarimetric observations
We live in a magnetic universe with magnetic fields spanning an enormous range of spatial and temporal scales.
In particular, magnetic fields at the scale of a galaxy are known as galactic magnetic fields and are the focus of this PhD thesis.
These galactic magnetic fields are important since they affect both the dynamics and the distribution of the interstellar gas.
The presence of these magnetic fields gives rise to a type of radiation at radio frequencies known as synchrotron radiation.
The observed polarization properties of this synchrotron radiation then serve to record the imprint of these magnetic fields.
The goal of this thesis has been to infer the structure of the magnetic field across various spatial scales in our own Galaxy, as well as the strength and structure of the magnetic field in other galaxies, using radiopolarimetric observations.
Probabilistic prediction of Dst storms one-day-ahead using Full-Disk SoHO Images
We present a new model for the probability that the Disturbance storm time
(Dst) index exceeds -100 nT, with a lead time between 1 and 3 days.
The Dst index provides essential information about the strength of the ring current around
the Earth caused by the protons and electrons from the solar wind, and it is
routinely used as a proxy for geomagnetic storms. The model is developed using
an ensemble of Convolutional Neural Networks (CNNs) that are trained using SoHO
images (MDI, EIT and LASCO). The relationship between the SoHO images and the
solar wind has been investigated by many researchers, but these studies have
not explicitly considered using SoHO images to predict the Dst index.
This work presents a novel methodology to train the individual models and to
learn the optimal ensemble weights iteratively, by using a customized
class-balanced mean square error (CB-MSE) loss function tied to a least-squares
(LS) based ensemble.
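One plausible form of such a class-balanced MSE, sketched here as an illustration (the function name, weighting scheme, and threshold handling are assumptions, not the paper's exact formulation), up-weights the rare storm class by inverse class frequency:

```python
import numpy as np

def cb_mse(y_true, y_pred, threshold=-100.0):
    """Illustrative class-balanced MSE: samples from the rare storm class
    (Dst < threshold) are weighted by inverse class frequency so both
    classes contribute equally on average."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    storm = y_true < threshold
    n, n_storm = len(y_true), int(storm.sum())
    # Inverse-frequency weights, normalized so the two classes balance.
    w = np.where(storm,
                 n / (2.0 * max(n_storm, 1)),
                 n / (2.0 * max(n - n_storm, 1)))
    return float(np.mean(w * (y_true - y_pred) ** 2))
```

A perfect prediction gives zero loss; an error on a storm sample is penalized more heavily whenever storms are the minority class.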
The proposed model can predict the probability that Dst<-100 nT 24 hours
ahead with a True Skill Statistic (TSS) of 0.62 and Matthews Correlation
Coefficient (MCC) of 0.37. The weighted TSS and MCC from Guastavino et al.
(2021) are 0.68 and 0.47, respectively. An additional validation during
non-Earth-directed CME periods is also conducted which yields a good TSS and
MCC score.
Comment: accepted by the journal Space Weather.
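The skill scores quoted above follow directly from binary confusion-matrix counts; a minimal sketch (the function name is illustrative):

```python
import math

def tss_mcc(tp, fp, fn, tn):
    """True Skill Statistic and Matthews Correlation Coefficient
    computed from confusion-matrix counts."""
    # TSS = sensitivity - false alarm rate
    tss = tp / (tp + fn) - fp / (fp + tn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return tss, mcc
```

A perfect classifier scores 1.0 on both; a classifier no better than chance scores 0.0 on both.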
Total Serum Bilirubin within 3 Months of Hepatoportoenterostomy Predicts Short-Term Outcomes in Biliary Atresia
OBJECTIVES:
To prospectively assess the value of serum total bilirubin (TB) within 3 months of hepatoportoenterostomy (HPE) in infants with biliary atresia as a biomarker predictive of clinical sequelae of liver disease in the first 2 years of life.
STUDY DESIGN:
Infants with biliary atresia undergoing HPE between June 2004 and January 2011 were enrolled in a prospective, multicenter study. Complications were monitored until 2 years of age or the earliest of liver transplantation (LT), death, or study withdrawal. TB below 2 mg/dL (34.2 μM) at any time in the first 3 months (TB <2.0, all others TB ≥ 2) after HPE was examined as a biomarker, using Kaplan-Meier survival and logistic regression.
RESULTS:
Fifty percent (68/137) of infants had TB < 2.0 in the first 3 months after HPE. Transplant-free survival at 2 years was significantly higher in the TB < 2.0 group vs TB ≥ 2 (86% vs 20%, P < .0001). Infants with TB ≥ 2 had diminished weight gain (P < .0001), greater probability of developing ascites (OR 6.4, 95% CI 2.9-14.1, P < .0001), hypoalbuminemia (OR 7.6, 95% CI 3.2-17.7, P < .0001), coagulopathy (OR 10.8, 95% CI 3.1-38.2, P = .0002), LT (OR 12.4, 95% CI 5.3-28.7, P < .0001), or LT or death (OR 16.8, 95% CI 7.2-39.2, P < .0001).
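Odds ratios of the kind reported above, with Wald 95% confidence intervals, follow from a 2x2 contingency table; a minimal sketch with hypothetical counts (the study's underlying cell counts are not reproduced here):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    odds_ratio = (a * d) / (b * c)
    # Standard error of log(OR) via the Wald approximation.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds_ratio) - z * se)
    hi = math.exp(math.log(odds_ratio) + z * se)
    return odds_ratio, lo, hi
```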
CONCLUSIONS:
Infants whose TB did not fall below 2.0 mg/dL within 3 months of HPE were at high risk for early disease progression, suggesting that they should be considered for LT in a timely fashion. Interventions increasing the likelihood of achieving TB < 2.0 mg/dL within 3 months of HPE may improve early outcomes.
Extrahepatic Anomalies in Infants With Biliary Atresia: Results of a Large Prospective North American Multicenter Study
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/100275/1/hep26512.pd
Pediatric Transplantation in the United States, 1997–2006
Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/73373/1/j.1600-6143.2008.02172.x.pd
PAAR-repeat proteins sharpen and diversify the Type VI secretion system spike
The bacterial type VI secretion system (T6SS) is a large, multi-component, dynamic macromolecular machine that plays an important role in the ecology of many Gram-negative bacteria. The T6SS is responsible for the translocation of a wide range of toxic effector molecules, allowing predatory cells to kill both prokaryotic and eukaryotic prey cells [1-5]. The T6SS organelle is functionally analogous to the contractile tails of bacteriophages and is thought to attack cells by initially penetrating them with a trimeric protein complex called the VgrG spike [6,7]. Neither the exact protein composition of the T6SS organelle nor the mechanisms of effector selection and delivery are known. Here we report that proteins from the PAAR (Proline-Alanine-Alanine-aRginine) repeat superfamily form a sharp conical extension on the VgrG spike, which is further involved in attaching effector domains to the spike. The crystal structures of two PAAR-repeat proteins bound to VgrG-like partners show that these proteins function to sharpen the tip of the VgrG spike. We demonstrate that PAAR proteins are essential for T6SS-mediated secretion and target-cell killing by Vibrio cholerae and Acinetobacter baylyi. Our results suggest a new model of the T6SS organelle in which the VgrG-PAAR spike complex is decorated with multiple effectors that are delivered simultaneously into target cells in a single contraction-driven translocation event.
Probabilistic Super-Resolution of Solar Magnetograms: Generating Many Explanations and Measuring Uncertainties
Machine learning techniques have been successfully applied to super-resolution tasks on natural images, where visually pleasing results are sufficient. However, in many scientific domains this is not adequate, and estimates of errors and uncertainties are crucial. To address this issue we propose a Bayesian framework that decomposes uncertainties into epistemic and aleatoric components. We test the validity of our approach by super-resolving images of the Sun's magnetic field and by generating maps measuring the range of possible high-resolution explanations compatible with a given low-resolution magnetogram.
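The epistemic/aleatoric split described above is commonly computed from repeated stochastic forward passes (e.g. Monte Carlo dropout or an ensemble); a minimal sketch, assuming each pass returns a predictive mean and variance per pixel (this is a standard decomposition, not necessarily the authors' exact procedure):

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Split total predictive variance from T stochastic forward passes.
    means, variances: arrays of shape (T, ...) with per-pass predictive
    means and variances. Aleatoric = mean of variances (data noise);
    epistemic = variance of means (model disagreement)."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    aleatoric = variances.mean(axis=0)
    epistemic = means.var(axis=0)
    return aleatoric, epistemic
```

When all passes agree exactly, the epistemic term vanishes and only the data noise remains.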
Single-Frame Super-Resolution of Solar Magnetograms: Investigating Physics-Based Metrics & Losses
Breakthroughs in our understanding of physical phenomena have traditionally followed improvements in instrumentation. Studies of the magnetic field of the Sun, and its influence on the solar dynamo and space weather events, have benefited from improvements in resolution and measurement frequency of new instruments. However, in order to fully understand the solar cycle, high-quality data across time-scales longer than the typical lifespan of a solar instrument are required. At the moment, discrepancies between measurement surveys prevent the combined use of all available data. In this work, we show that machine learning can help bridge the gap between measurement surveys by learning to super-resolve low-resolution magnetic field images and translate between characteristics of contemporary instruments in orbit. We also introduce the notion of physics-based metrics and losses for super-resolution to preserve underlying physics and constrain the solution space of possible super-resolution outputs.
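A physics-based loss of the kind the abstract describes could, for example, penalize disagreement in total unsigned magnetic flux between output and reference. This is an assumed example of such a term, not necessarily one of the authors' actual metrics:

```python
import numpy as np

def flux_loss(sr, hr):
    """Illustrative physics-based penalty: relative disagreement in total
    unsigned magnetic flux between a super-resolved magnetogram (sr) and a
    high-resolution reference (hr)."""
    sr = np.asarray(sr, dtype=float)
    hr = np.asarray(hr, dtype=float)
    f_sr = np.abs(sr).sum()
    f_hr = np.abs(hr).sum()
    return abs(f_sr - f_hr) / max(f_hr, 1e-12)
```

Added to a pixel-wise loss, a term like this discourages outputs that look plausible but create or destroy net unsigned flux.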
Parallel computing in enterprise modeling.
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
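When there is no spatial organizing principle, one simple fallback is to hash entities across processors so load stays roughly even. This is an illustrative sketch of that idea, not the Parallel Particle Data Model itself:

```python
def partition_entities(entity_ids, n_ranks):
    """Hash-partition entity IDs across n_ranks processors.
    Without spatial locality to exploit, a hash gives a deterministic,
    roughly uniform assignment of entities to ranks."""
    buckets = [[] for _ in range(n_ranks)]
    for eid in entity_ids:
        buckets[hash(eid) % n_ranks].append(eid)
    return buckets
```

The trade-off is that interacting entities may land on different ranks, so communication cost, rather than load balance, becomes the limiting factor.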
Ideas for Improving the Field of Machine Learning: Summarizing Discussion from the NeurIPS 2019 Retrospectives Workshop
This report documents ideas for improving the field of machine learning, which arose from discussions at the ML Retrospectives workshop at NeurIPS 2019. The goal of the report is to disseminate these ideas more broadly, and in turn encourage continuing discussion about how the field could improve along these axes. We focus on topics that were most discussed at the workshop: incentives for encouraging alternate forms of scholarship, re-structuring the review process, participation from academia and industry, and how we might better train computer scientists as scientists. Videos from the workshop can be accessed at https://slideslive.com/neurips/west-114-115-retrospectives-a-venue-for-selfreflection-in-ml-researc
