48 research outputs found
Multi-messenger observations of a binary neutron star merger
On 2017 August 17, a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40 (+8/−8) Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M☉. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position ~9 and ~16 days after the merger, respectively. Both the X-ray and radio emission likely arise from a physical process distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.
Comparative effectiveness of Cladribine tablets vs other drugs in relapsing-remitting multiple sclerosis: an approach merging randomized controlled trial with real-life data
Combined antiretroviral therapy reduces hyperimmunoglobulinemia in HIV-1 infected children
Objective: To evaluate the effect of combined antiretroviral therapy on serum immunoglobulin (Ig) levels in HIV-1 perinatally infected children. Methods: Data from 1250 children recorded by the Italian Register for HIV Infection in Children from 1985 to 2002 were analysed. Since Ig levels physiologically vary with age, differences at different age periods were evaluated as differences in z-scores calculated using the means and standard deviations of a normal population for each age period. Combined antiretroviral therapy has become widespread in Italy since 1996, thus differences in Ig z-scores between the periods 1985-1995 and 1996-2002 were analysed. Data were also analysed according to type of therapeutic regimen. Results: Between the two periods 1985-1995 and 1996-2002, significant (P < 0.0001) decreases in IgG (6.29 ± 4.72 versus 4.44 ± 4.33), IgM (9.25 ± 13.32 versus 5.61 ± 7.93), and IgA (10.25 ± 15.68 versus 6.48 ± 11.56) z-scores, together with a parallel significant (P < 0.0001) increase in CD4 T-lymphocyte percentages, were found. These decreases were confirmed regardless of whether the children were receiving intravenous Ig or not. Ig z-scores were significantly higher in children receiving monotherapy than in those receiving double-combined therapy (IgG, P < 0.0001; IgM, P = 0.003; IgA, P = 0.031), and in the latter children than in those receiving three or more drugs (P < 0.0001 for all z-scores). Ig z-scores correlated inversely with CD4 T-lymphocyte percentages and directly with viral loads. Conclusions: Our data show that in HIV-1 infected children combined antiretroviral therapy leads to a reduction of hyperimmunoglobulinemia, which parallels restoration of CD4 T-lymphocyte percentage and viral load decrease, and in turn probably reflects improved B-lymphocyte function. © 2004 Lippincott Williams & Wilkins
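The age-adjusted standardization described in the methods can be sketched as follows. This is a minimal illustration of the z-score calculation against age-period reference norms; the reference mean and standard deviation used here are hypothetical placeholders, not the registry's actual normative data.

```python
# Age-adjusted z-score: how far an Ig level sits from the healthy-population
# mean for that age period, in units of the reference standard deviation.
# Reference values below are illustrative only.

def ig_z_score(value: float, ref_mean: float, ref_sd: float) -> float:
    """Standardize an immunoglobulin level against age-period reference norms."""
    return (value - ref_mean) / ref_sd

# Example (hypothetical numbers): an IgG level of 18 g/L in an age period
# whose reference mean is 9 g/L with SD 2 g/L gives z = 4.5, i.e. marked
# hyperimmunoglobulinemia relative to age-matched norms.
z = ig_z_score(18.0, ref_mean=9.0, ref_sd=2.0)
```

Because z-scores remove the age dependence, values from children of different ages can be pooled and compared across the two treatment eras, as done in the study.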
Testing a global standard for quantifying species recovery and assessing conservation impact
Recognizing the imperative to evaluate species recovery and conservation impact, in 2012 the International Union for Conservation of Nature (IUCN) called for development of a “Green List of Species” (now the IUCN Green Status of Species). A draft Green Status framework for assessing species’ progress toward recovery, published in 2018, proposed 2 separate but interlinked components: a standardized method (i.e., measurement against benchmarks of species’ viability, functionality, and preimpact distribution) to determine current species recovery status (herein species recovery score) and application of that method to estimate past and potential future impacts of conservation based on 4 metrics (conservation legacy, conservation dependence, conservation gain, and recovery potential). We tested the framework with 181 species representing diverse taxa, life histories, biomes, and IUCN Red List categories (extinction risk). Based on the observed distribution of species’ recovery scores, we propose the following species recovery categories: fully recovered, slightly depleted, moderately depleted, largely depleted, critically depleted, extinct in the wild, and indeterminate. Fifty-nine percent of tested species were considered largely or critically depleted. Although there was a negative relationship between extinction risk and species recovery score, variation was considerable. Some species in lower risk categories were assessed as farther from recovery than those at higher risk. This emphasizes that species recovery is conceptually different from extinction risk and reinforces the utility of the IUCN Green Status of Species to more fully understand species conservation status. Although extinction risk did not predict conservation legacy, conservation dependence, or conservation gain, it was positively correlated with recovery potential. 
Only 1.7% of tested species were categorized as zero across all 4 of these conservation impact metrics, indicating that conservation has played, or will play, a role in improving or maintaining species status for the vast majority of these species. Based on our results, we devised an updated assessment framework that introduces the option of using a dynamic baseline to assess future impacts of conservation over the short term (to avoid the misleading results generated in a small number of cases) and redefines the short term as 10 years to better align with conservation planning. These changes are reflected in the IUCN Green Status of Species Standard.
Dual-target-directed drugs that block monoamine oxidase B and adenosine A2A receptors for Parkinson’s disease
Warfarin or acenocoumarol: which is better in the management of oral anticoagulants?
Warfarin is employed more frequently than acenocoumarol because of its longer half-life (36 h), theoretically providing more stable anticoagulation and avoiding the factor VII fluctuations that can occur during acenocoumarol treatment (half-life 10 h). The aim of our study was to compare acenocoumarol with warfarin in the same group of 103 patients, who started oral anticoagulation with acenocoumarol and then changed to warfarin. In these patients we compared the previous six-month period on acenocoumarol treatment (July-December 1996) with a new six-month period on warfarin (July-December 1997). We wished to know whether warfarin could improve the quality and stability of oral anticoagulation in our patients, and whether there was a difference between the two drugs in the weekly mean dose per patient. Moreover, in order to detect the possible daily fluctuation of factor VII, we evaluated a further group of 54 patients. A subgroup of these patients was treated wi..
