
    Measuring hearing in wild beluga whales

    Author Posting. © The Author(s), 2016. This is the author's version of the work. It is posted here by permission of Springer for personal use, not for redistribution. The definitive version was published in "The Effects of Noise on Aquatic Life II," edited by Arthur N. Popper, Anthony Hawkins, 729-735. New York, NY: Springer, 2016. doi: 10.1007/978-1-4939-2981-8_88.
    We measured the hearing abilities of seven wild beluga whales (Delphinapterus leucas) during a collection-and-release experiment in Bristol Bay, AK, USA. Here we summarize the methods and initial data from one animal, discussing the implications of this experiment. Audiograms were collected from 4-150 kHz. The animal with the lowest threshold heard best at 80 kHz and demonstrated overall good hearing from 22-110 kHz. The robustness of the methodology and data suggest AEP audiograms can be incorporated into future collection-and-release health assessments. Such methods may provide high-quality results for multiple animals, facilitating population-level audiograms and hearing measures in new species.
    Project funding and field support provided by Georgia Aquarium and the National Marine Mammal Laboratory of the Alaska Fisheries Science Center (NMML/AFSC). Field work also supported by National Marine Fisheries Service Alaska Regional Office (NMFS AKR), WHOI Arctic Research Initiative, WHOI Ocean Life Institute, U.S. Fish and Wildlife Service, Bristol Bay Native Association, Alaska SeaLife Center, Shedd Aquarium and Mystic Aquarium. Audiogram analyses were funded by the Office of Naval Research award number N000141210203 (from Michael Weise).

    Chronic non-specific low back pain - sub-groups or a single mechanism?

    Copyright 2008 Wand and O'Connell; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
    Background: Low back pain is a substantial health problem and has subsequently attracted a considerable amount of research. Clinical trials evaluating the efficacy of a variety of interventions for chronic non-specific low back pain indicate limited effectiveness for most commonly applied interventions and approaches.
    Discussion: Many clinicians challenge the results of clinical trials as they feel that this lack of effectiveness is at odds with their clinical experience of managing patients with back pain. A common explanation for this discrepancy is the perceived heterogeneity of patients with chronic non-specific low back pain. It is felt that the effects of treatment may be diluted by the application of a single intervention to a complex, heterogeneous group with diverse treatment needs. This argument presupposes that current treatment is effective when applied to the correct patient. An alternative perspective is that the clinical trials are correct and current treatments have limited efficacy. Preoccupation with sub-grouping may stifle engagement with this view and it is important that the sub-grouping paradigm is closely examined. This paper argues that there are numerous problems with the sub-grouping approach and that it may not be an important reason for the disappointing results of clinical trials. We propose instead that current treatment may be ineffective because it has been misdirected. Recent evidence that demonstrates changes within the brain in chronic low back pain sufferers raises the possibility that persistent back pain may be a problem of cortical reorganisation and degeneration. This perspective offers interesting insights into the chronic low back pain experience and suggests alternative models of intervention.
    Summary: The disappointing results of clinical research are commonly explained by the failure of researchers to adequately attend to sub-grouping of the chronic non-specific low back pain population. Alternatively, current approaches may be ineffective and clinicians and researchers may need to radically rethink the nature of the problem and how it should best be managed.

    Evaluation of Quantitative EEG by Classification and Regression Trees to Characterize Responders to Antidepressant and Placebo Treatment

    The study objective was to evaluate the usefulness of Classification and Regression Trees (CART) to classify clinical responders to antidepressant and placebo treatment, utilizing symptom severity and quantitative EEG (QEEG) data. Patients included 51 adults with unipolar depression who completed treatment trials using either fluoxetine, venlafaxine or placebo. Hamilton Depression Rating Scale (HAM-D) scores and single-electrode data were recorded at baseline and at days 2, 7, 14, 28 and 56. Patients were classified as medication and placebo responders or non-responders. CART analysis of HAM-D scores showed that patients with HAM-D scores lower than 13 by day 7 were more likely to be treatment responders to fluoxetine or venlafaxine compared to non-responders (p=0.001). Youden’s index γ revealed that CART models using QEEG measures were more accurate than HAM-D-based models. Among patients given fluoxetine, those with a decrease at day 2 in θ cordance at AF2 were classified by CART as treatment responders (p=0.02). For those receiving venlafaxine, CART identified a decrease in δ absolute power at day 7 in the PO2 region as characterizing treatment responders (p=0.01). Using all patients receiving medication, CART identified a decrease in δ absolute power at day 2 in the FP1 region as characteristic of non-response to medication (p=0.003). Optimal trees from the QEEG CART analysis primarily utilized cordance values, but also incorporated some δ absolute power values. The results of our study suggest that CART may be a useful method for identifying potential outcome predictors in the treatment of major depression.
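    A minimal sketch of the CART idea described above, assuming scikit-learn's DecisionTreeClassifier as a stand-in for the CART software actually used in the study; the feature values, labels and tuning below are simulated placeholders, not study data.
```python
# Hedged illustration (not the authors' code): fit a CART-style tree on
# simulated stand-ins for the predictors named in the abstract.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 51  # sample size reported in the abstract

# Simulated placeholder features; real values would come from the trial data.
hamd_day7 = rng.normal(15, 5, n)                 # HAM-D score at day 7
d_theta_cordance_day2 = rng.normal(0, 0.3, n)    # change in theta cordance, day 2
d_delta_power_day7 = rng.normal(0, 1.0, n)       # change in delta absolute power, day 7
X = np.column_stack([hamd_day7, d_theta_cordance_day2, d_delta_power_day7])

# Toy outcome consistent with the reported split (responder if day-7 HAM-D < 13).
y = (hamd_day7 < 13).astype(int)

cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5, random_state=0)
cart.fit(X, y)
print(export_text(cart, feature_names=[
    "hamd_day7", "d_theta_cordance_day2", "d_delta_power_day7"]))
```
    In practice the tree would be grown and validated (e.g. by cross-validation) and its splits, such as the day-7 HAM-D < 13 threshold, read off as candidate outcome predictors.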

    Massive stars as thermonuclear reactors and their explosions following core collapse

    Nuclear reactions transform atomic nuclei inside stars. This is the process of stellar nucleosynthesis. The basic concepts of determining nuclear reaction rates inside stars are reviewed. How stars manage to burn their fuel so slowly most of the time is also considered. Stellar thermonuclear reactions involving protons in hydrostatic burning are discussed first. Then I discuss triple-alpha reactions in the helium-burning stage. Carbon and oxygen survive in red giant stars because of the nuclear structure of oxygen and neon. Further nuclear burning of carbon, neon, oxygen and silicon in quiescent conditions is discussed next. In the subsequent core-collapse phase, neutronization due to electron capture from the top of the Fermi sea in a degenerate core takes place. The expected signal of neutrinos from a nearby supernova is calculated. The supernova often explodes inside a dense circumstellar medium, which is established due to the progenitor star losing its outermost envelope in a stellar wind or mass transfer in a binary system. The nature of the circumstellar medium and the ejecta of the supernova and their dynamics are revealed by observations in the optical, IR, radio, and X-ray bands, and I discuss some of these observations and their interpretations.
    Comment: To be published in "Principles and Perspectives in Cosmochemistry", Lecture Notes on Kodai School on Synthesis of Elements in Stars; ed. by Aruna Goswami & Eswar Reddy, Springer Verlag, 2009. Contains 21 figures.
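    For readers who want the formula behind the reaction-rate discussion, the standard non-resonant thermonuclear rate per particle pair is sketched below in its textbook form; this expression is general background, not quoted from the abstract itself.
```latex
% Non-resonant thermonuclear reaction rate per particle pair (textbook form).
% \mu is the reduced mass, S(E) the astrophysical S-factor, and the second
% exponential is the Coulomb-barrier (Gamow) penetration factor.
\langle \sigma v \rangle
  = \left(\frac{8}{\pi \mu}\right)^{1/2} \frac{1}{(kT)^{3/2}}
    \int_{0}^{\infty} S(E)\,
    \exp\!\left(-\frac{E}{kT} - \frac{b}{\sqrt{E}}\right) dE,
\qquad
b = \frac{\pi \sqrt{2\mu}\, Z_1 Z_2 e^2}{\hbar}.
```
    The competition between the falling Maxwell-Boltzmann factor and the rising barrier-penetration factor confines the integrand to a narrow Gamow peak, which is the basic reason stellar fuel burns so slowly under hydrostatic conditions.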

    A copula model for marked point processes

    The final publication (Diao, Liqun, Richard J. Cook, and Ker-Ai Lee. (2013) A copula model for marked point processes. Lifetime Data Analysis, 19(4): 463-489) is available at Springer via http://dx.doi.org/10.1007/s10985-013-9259-3
    Many chronic diseases feature recurring clinically important events. In addition, however, there often exists a random variable which is realized upon the occurrence of each event reflecting the severity of the event, a cost associated with it, or possibly a short term response indicating the effect of a therapeutic intervention. We describe a novel model for a marked point process which incorporates a dependence between continuous marks and the event process through the use of a copula function. The copula formulation ensures that event times can be modeled by any intensity function for point processes, and any multivariate model can be specified for the continuous marks. The relative efficiency of joint versus separate analyses of the event times and the marks is examined through simulation under random censoring. An application to data from a recent trial in transfusion medicine is given for illustration.
    Natural Sciences and Engineering Research Council of Canada (RGPIN 155849); Canadian Institutes for Health Research (FRN 13887); Canada Research Chair (Tier 1) – CIHR funded (950-226626).
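    For context on the copula construction the abstract relies on, the generic decomposition (Sklar's theorem, written here in standard textbook notation rather than the paper's own) is sketched below, with the one-parameter Clayton family as an example.
```latex
% Joint distribution of an event-time quantity T and a continuous mark Y
% expressed through a copula C applied to the marginal distributions;
% the Clayton copula is shown as one possible one-parameter choice.
H(t, y) = C\bigl(F_T(t),\, G_Y(y)\bigr),
\qquad
C_{\theta}(u, v) = \bigl(u^{-\theta} + v^{-\theta} - 1\bigr)^{-1/\theta},
\quad \theta > 0.
```
    Because the dependence enters only through C, the margins remain free: any intensity model can be used for the event times and any multivariate model for the marks, which is the flexibility the abstract emphasizes.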

    Sensitive Detection of Plasmodium vivax Using a High-Throughput, Colourimetric Loop Mediated Isothermal Amplification (HtLAMP) Platform: A Potential Novel Tool for Malaria Elimination.

    INTRODUCTION: Plasmodium vivax malaria has a wide geographic distribution and poses challenges to malaria elimination that are likely to be greater than those of P. falciparum. Diagnostic tools for P. vivax infection in non-reference laboratory settings are limited to microscopy and rapid diagnostic tests, but these are unreliable at low parasitemia. The development and validation of a high-throughput and sensitive assay for P. vivax is a priority.
    METHODS: A high-throughput LAMP assay targeting a P. vivax mitochondrial gene and deploying colorimetric detection in a 96-well plate format was developed and evaluated in the laboratory. Diagnostic accuracy was compared against microscopy, antigen detection tests and PCR and validated in samples from malaria patients and community controls in a district hospital setting in Sabah, Malaysia.
    RESULTS: The high-throughput LAMP-P. vivax assay (HtLAMP-Pv) performed with an estimated limit of detection of 1.4 parasites/μL. Assay primers demonstrated cross-reactivity with P. knowlesi but not with other Plasmodium spp. Field testing of HtLAMP-Pv was conducted using 149 samples from symptomatic malaria patients (64 P. vivax, 17 P. falciparum, 56 P. knowlesi, 7 P. malariae, 1 mixed P. knowlesi/P. vivax, with 4 excluded). When compared against multiplex PCR, HtLAMP-Pv demonstrated a sensitivity for P. vivax of 95% (95% CI 87-99%; 61/64) and a specificity of 100% (95% CI 86-100%; 25/25) when P. knowlesi samples were excluded. HtLAMP-Pv testing of 112 samples from asymptomatic community controls, 7 of which had submicroscopic P. vivax infections by PCR, showed a sensitivity of 71% (95% CI 29-96%; 5/7) and specificity of 93% (95% CI 87-97%; 98/105).
    CONCLUSION: This novel HtLAMP-P. vivax assay has the potential to be a useful field-applicable molecular diagnostic test for P. vivax infection in elimination settings.
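    As a quick arithmetic check of the proportions reported above, the sketch below recomputes the point estimates and exact (Clopper-Pearson) binomial 95% confidence intervals from the stated counts; the paper's own interval method is not given here, so the exact-binomial choice is an assumption.
```python
# Hedged sketch: recompute sensitivity/specificity and exact binomial CIs
# from the counts quoted in the abstract.
from scipy.stats import beta

def exact_ci(x, n, alpha=0.05):
    """Clopper-Pearson exact (1 - alpha) confidence interval for x successes in n trials."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

counts = [
    ("sensitivity, symptomatic patients", 61, 64),
    ("specificity, symptomatic patients", 25, 25),
    ("sensitivity, community controls", 5, 7),
    ("specificity, community controls", 98, 105),
]
for label, x, n in counts:
    lo, hi = exact_ci(x, n)
    print(f"{label}: {x}/{n} = {x / n:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```
    Under this exact-binomial assumption the counts give back intervals consistent with those quoted in the abstract.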

    Do birds of a feather flock together? Comparing habitat preferences of piscivorous waterbirds in a lowland river catchment

    Waterbirds can move into and exploit new areas of suitable habitat outside of their native range. One such example is the little egret (Egretta garzetta), a piscivorous bird which has colonised southern Britain within the last 30 years. Yet, habitat use by little egrets within Britain, and how such patterns of habitat exploitation compare with native piscivores, remains unknown. We examine overlap in habitat preferences within a river catchment between the little egret and two native species, the grey heron (Ardea cinerea) and great cormorant (Phalacrocorax carbo). All species showed strong preferences for river habitat in all seasons, with other habitat types used as auxiliary feeding areas. Seasonal use of multiple habitat types is consistent with egret habitat use within its native range. We found strong egret preference for aquatic habitats, in particular freshwaters, compared with pasture and arable agricultural habitat. Egrets showed greater shared habitat preferences with herons, the native species to which egrets are most morphologically and functionally similar. This is the first study to quantify little egret habitat preferences outside of its native range.

    Framework, principles and recommendations for utilising participatory methodologies in the co-creation and evaluation of public health interventions

    Background: Due to the chronic disease burden on society, there is a need for preventive public health interventions to stimulate society towards a healthier lifestyle. To deal with the complex variability between individual lifestyles and settings, collaborating with end-users to develop interventions tailored to their unique circumstances has been suggested as a potential way to improve effectiveness and adherence. Co-creation of public health interventions using participatory methodologies has shown promise but lacks a framework to make this process systematic. The aim of this paper was to identify and set key principles and recommendations for systematically applying participatory methodologies to co-create and evaluate public health interventions.
    Methods: These principles and recommendations were derived using an iterative reflection process, combining key learning from published literature in addition to critical reflection on three case studies conducted by research groups in three European institutions, all of whom have expertise in co-creating public health interventions using different participatory methodologies.
    Results: Key principles and recommendations for using participatory methodologies in public health intervention co-creation are presented for the stages of: Planning (framing the aim of the study and identifying the appropriate sampling strategy); Conducting (defining the procedure, in addition to manifesting ownership); Evaluating (the process and the effectiveness) and Reporting (providing guidelines to report the findings). Three scaling models are proposed to demonstrate how to scale locally developed interventions to a population level.
    Conclusions: These recommendations aim to facilitate public health intervention co-creation and evaluation utilising participatory methodologies by ensuring the process is systematic and reproducible.

    Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector

    The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta.
    Comment: 10 pages plus author list (21 pages total), 8 figures, 1 table, final version published in European Physical Journal

    The role of planetary formation and evolution in shaping the composition of exoplanetary atmospheres

    Over the last twenty years, the search for extrasolar planets has revealed to us the rich diversity of the outcomes of the formation and evolution of planetary systems. In order to fully understand how these extrasolar planets came to be, however, the orbital and physical data we possess are not enough, and they need to be complemented with information on the composition of the exoplanets. Ground-based and space-based observations provided the first data on the atmospheric composition of a few extrasolar planets, but a larger and more detailed sample is required before we can fully take advantage of it. The primary goal of the Exoplanet Characterization Observatory (EChO) is to fill this gap, expanding the limited data we possess by performing a systematic survey of hundreds of extrasolar planets. The full exploitation of the data that EChO and other space-based and ground-based facilities will provide in the near future, however, requires knowledge of the sources and sinks of the chemical species and molecules that will be observed. Luckily, the study of the past history of the Solar System provides several indications on the effects of processes like migration, late accretion and secular impacts, and on the times at which they occur in the life of planetary systems. In this work we will review what is already known about the factors influencing the composition of planetary atmospheres, focusing on the case of gaseous giant planets, and what instead still needs to be investigated.
    Comment: 26 pages, 9 figures, 1 table. Accepted for publication in Experimental Astronomy, special issue on the M3 EChO mission candidate.