
    The High Cadence Transient Survey (HiTS). I. Survey Design and Supernova Shock Breakout Constraints

    We present the first results of the High Cadence Transient Survey (HiTS), a survey whose objective is to detect and follow up optical transients with characteristic timescales from hours to days, especially the earliest hours of supernova (SN) explosions. HiTS uses the Dark Energy Camera and a custom pipeline for image subtraction, candidate filtering, and candidate visualization, which runs in real time so that we can react rapidly to new transients. We discuss the survey design, the technical challenges associated with the real-time analysis of these large volumes of data, and our first results. In our 2013, 2014, and 2015 campaigns, we detected more than 120 young SN candidates, but we did not find a clear signature from the short-lived SN shock breakouts (SBOs) that follow the core collapse of red supergiant stars, which was the initial science aim of this survey. Using the empirical distribution of limiting magnitudes from our observational campaigns, we measured the expected recovery fraction of randomly injected SN light curves, which included SBO optical peaks produced with models from Tominaga et al. (2011) and Nakar & Sari (2010). From this analysis, we cannot rule out the models from Tominaga et al. (2011) under any reasonable distribution of progenitor masses, but we can marginally rule out the brighter and longer-lived SBO models from Nakar & Sari (2010) under our best-guess distribution of progenitor masses. Finally, we highlight the implications of this work for future massive data sets produced by astronomical observatories such as LSST. http://iopscience.iop.org/article/10.3847/0004-637X/832/2/155/meta
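
    As an illustration of the recovery-fraction measurement described above, the sketch below injects synthetic light curves against an empirical distribution of limiting magnitudes and counts how many would have been detected. It is a minimal toy version, not the HiTS pipeline: the limiting-magnitude distribution, the detection criterion (two epochs brighter than the limit), and the fading toy light curve are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical empirical limiting magnitudes, one per observation epoch;
# in the paper these come from the actual HiTS campaign data.
limiting_mags = rng.normal(24.5, 0.4, size=(1000, 30))  # trials x epochs

def recovery_fraction(model_mags, n_required=2):
    """Fraction of injected light curves detected in at least
    n_required epochs brighter than the limiting magnitude."""
    detected = model_mags < limiting_mags  # brighter = numerically smaller mag
    return np.mean(detected.sum(axis=1) >= n_required)

# Toy SBO-like light curve: bright early peak fading at ~2 mag/day.
epochs = np.linspace(0.0, 3.0, 30)         # days since explosion
toy_curve = 23.0 + 2.0 * epochs            # apparent magnitude
model_mags = np.tile(toy_curve, (1000, 1)) + rng.normal(0.0, 0.2, (1000, 30))

print(f"recovery fraction: {recovery_fraction(model_mags):.2f}")
```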

    Analytic philosophy for biomedical research: the imperative of applying yesterday's timeless messages to today's impasses

    The mantra that "the best way to predict the future is to invent it" (attributed to the computer scientist Alan Kay) exemplifies some of the expectations placed on the technical and innovative sides of biomedical research at present. However, for technical advancements to make a real impact both on patient health and on genuine scientific understanding, a number of lingering challenges, spanning the entire spectrum from protein biology to randomized controlled trials, must be overcome. The proposal in this chapter is that philosophy is essential to this process. By reviewing select examples from the history of science and philosophy, disciplines that were indistinguishable until the mid-nineteenth century, I argue that progress on the many impasses in biomedicine can be achieved by emphasizing theoretical work (in the true sense of the word 'theory') as a vital foundation for experimental biology. Furthermore, a philosophical biology program that could provide a framework for such theoretical investigations is outlined.

    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by fusing it with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g., speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN with an application to the online decoding (i.e., recognition) of human kinematics.
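
    The paper derives its own Bayesian update equations for inverting the generative RNN; as a loose illustration of the predict-then-correct-with-prediction-error idea, the sketch below uses a generic extended-Kalman-style approximation instead. The network weights, observation matrix, and noise levels are all hypothetical, and the linearized update is an assumption standing in for the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                    # hidden neurons (hypothetical)

# Hypothetical generative RNN: x_{t+1} = tanh(W x_t) + noise, observed via C.
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
C = rng.normal(0.0, 1.0, (2, n))         # maps hidden state to 2-D kinematics
q, r = 1e-3, 1e-2                        # process / observation noise variances

def rrnn_step(mu, P, y):
    """One recognition step: predict with the RNN, then correct with the
    prediction error (an extended-Kalman-style stand-in for the paper's
    exact Bayesian update equations)."""
    mu_pred = np.tanh(W @ mu)                         # prediction message
    J = (1.0 - np.tanh(W @ mu) ** 2)[:, None] * W     # Jacobian of tanh(Wx)
    P_pred = J @ P @ J.T + q * np.eye(n)
    err = y - C @ mu_pred                             # prediction error message
    S = C @ P_pred @ C.T + r * np.eye(2)
    K = P_pred @ C.T @ np.linalg.inv(S)               # Kalman gain
    return mu_pred + K @ err, (np.eye(n) - K @ C) @ P_pred

# Simulate the generative model and decode its output online.
mu, P = np.zeros(n), np.eye(n)
x = rng.normal(0.0, 1.0, n)
for _ in range(50):
    x = np.tanh(W @ x) + rng.normal(0.0, np.sqrt(q), n)
    y = C @ x + rng.normal(0.0, np.sqrt(r), 2)
    mu, P = rrnn_step(mu, P, y)
print("final state-decoding error:", np.linalg.norm(mu - x))
```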

    A latent trait look at pretest-posttest validation of criterion-referenced test items

    Since Cox and Vargas (1966) introduced their pretest-posttest validity index for criterion-referenced test items, a great number of additions and modifications have followed. All are based on the idea of gain scoring; that is, they are computed from the differences between the proportions of pretest and posttest item responses. Although the method is simple and generally considered the prototype of criterion-referenced item analysis, it has many serious disadvantages. Some of these stem from the fact that it leads to indices based on item p values, which depend both on a dual test administration and on the population. Others have to do with the merely global information about discriminating power that these indices provide, the implicit weighting they assume, and the meaningless maximization of posttest scores to which they lead. Analyzing the pretest-posttest method from a latent trait point of view, it is proposed to replace indices like Cox and Vargas' Dpp with an evaluation of the item information function at the mastery score. An empirical study was conducted to compare the differences in item selection between the two methods.
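
    For concreteness, under a standard three-parameter logistic (3PL) model the item information function proposed as the selection criterion can be evaluated at the mastery score as I(theta) = a^2 * ((P - c) / (1 - c))^2 * (1 - P) / P. The sketch below ranks a hypothetical item pool by information at a hypothetical mastery cutoff; the item parameters are invented, and the optional D = 1.7 scaling constant is omitted.

```python
import numpy as np

def item_information(theta, a, b, c=0.0):
    """Fisher information of a 3PL item at ability theta:
    I(theta) = a^2 * ((P - c) / (1 - c))^2 * (1 - P) / P."""
    p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * ((p - c) / (1.0 - c)) ** 2 * (1.0 - p) / p

# Hypothetical mastery score and item pool: (a, b, c) = (discrimination,
# difficulty, guessing). Items are ranked by information at the cutoff.
theta_0 = 0.5
items = [(1.2, 0.3, 0.1), (0.8, 0.5, 0.2), (1.5, 0.9, 0.0)]
for a, b, c in sorted(items, key=lambda it: -item_information(theta_0, *it)):
    info = item_information(theta_0, a, b, c)
    print(f"a={a:.1f} b={b:.1f} c={c:.1f}  I(theta_0)={info:.3f}")
```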

    Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √ s = 8 TeV with the ATLAS detector

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb⁻¹ of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with p_T > 120 GeV and no leptons. Nine signal regions are considered, with increasing missing transverse momentum requirements between E_T^miss > 150 GeV and E_T^miss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar topologies in the final state are presented.

    Applying Bayesian model averaging for uncertainty estimation of input data in energy modelling

    Background: Energy scenarios that are used for policy advice have an ecological and social impact on society. Policy measures that are based on modelling exercises may have far-reaching financial and ecological consequences. The purpose of this study is to raise awareness that energy modelling results are accompanied by uncertainties that should be addressed explicitly. Methods: In view of existing approaches to uncertainty assessment in energy economics and climate science, relevant requirements for an uncertainty assessment are defined. An uncertainty assessment should be explicit, independent of the assessor's expertise, applicable to different models, inclusive of both subjective quantitative and statistical quantitative aspects, intuitively understandable, and reproducible. Bayesian model averaging over input variables of energy models is discussed as a method that satisfies these requirements. A definition of uncertainty based on posterior model probabilities of input variables to energy models is presented. Results: The main findings are that (1) expert elicitation, the predominant assessment method, does not satisfy all requirements; (2) Bayesian model averaging for input variable modelling meets the requirements and allows a vast number of potentially relevant influences on input variables to be evaluated; and (3) posterior model probabilities of input variable models can be translated into the uncertainty associated with the input variable. Conclusions: An uncertainty assessment of energy scenarios is relevant if policy measures are (partially) based on modelling exercises. A potential implication of these findings is that energy scenarios may be associated with uncertainty that is presently neither assessed explicitly nor communicated adequately.
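
    As a rough illustration of finding (3), the sketch below turns BIC scores of competing models for a single input variable into approximate posterior model probabilities (assuming equal prior model probabilities) and uses the between-model spread of forecasts as an uncertainty proxy. The models, forecasts, and BIC values are invented for illustration and are not from the study.

```python
import numpy as np

def bma_weights(bic_values):
    """Approximate posterior model probabilities from BIC values,
    assuming equal prior model probabilities."""
    bic = np.asarray(bic_values, dtype=float)
    w = np.exp(-0.5 * (bic - bic.min()))
    return w / w.sum()

# Hypothetical: three candidate models for one input variable (say, a fuel
# price), each with a point forecast and a BIC from fitting historical data.
forecasts = np.array([42.0, 55.0, 48.0])
bics = [310.2, 308.9, 312.5]

w = bma_weights(bics)
mean = w @ forecasts
spread = np.sqrt(w @ (forecasts - mean) ** 2)  # between-model spread
print(f"weights={np.round(w, 3)}, BMA forecast={mean:.1f}, spread={spread:.1f}")
```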

    The disruption of proteostasis in neurodegenerative diseases

    Cells rely on surveillance systems to monitor and protect the cellular proteome which, besides being highly heterogeneous, is constantly challenged by intrinsic and environmental factors. In this context, the proteostasis network (PN) is essential for achieving a stable and functional proteome. Disruption of the PN is associated with aging and can lead to and/or potentiate the occurrence of many neurodegenerative diseases (NDs). This emphasizes not only the importance of the PN in health span and aging but also how its modulation can be a potential target for intervention and treatment of human diseases.

    Herbivore-induced shifts in carbon and nitrogen allocation in red oak seedlings

    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/65993/1/j.1469-8137.2008.02420.x.pdf

    Two years follow-up study of the pain-relieving effect of gold bead implantation in dogs with hip-joint arthritis

    Seventy-eight dogs with pain from hip dysplasia participated in a six-month placebo-controlled, double-blinded clinical trial of gold bead implantation. In the present, non-blinded study, 73 of these dogs were followed for an additional 18 months to evaluate the long-term pain-relieving effect of gold bead implantation. The recently published results of the six-month period revealed that 30 of the 36 dogs (83%) in the gold implantation group showed significant improvement (p = 0.02), including improved mobility and a reduction in the signs of pain, compared to the placebo group (60% improvement). In the long-term two-year follow-up study, 66 of the 73 dogs had gold implantation and seven dogs continued as a control group. The 32 dogs in the original placebo group had gold beads implanted and were followed for a further 18 months. A certified veterinary acupuncturist used the same procedure to insert the gold beads as in the blinded study, and the owners completed the same type of detailed questionnaires. As in the blinded study, one investigator was responsible for all the assessments of each dog. The present study revealed that the pain-relieving effect of gold bead implantation observed in the blinded study continued throughout the two-year follow-up period.

    Learning deterministic probabilistic automata from a model checking perspective

    Probabilistic automata models play an important role in the formal design and analysis of hardware and software systems. In this area of application, one is often interested in formal model-checking procedures for verifying critical system properties. Since adequate system models are often difficult to design manually, we are interested in learning models from observed system behaviors. To this end, we adopt techniques for learning finite probabilistic automata, notably the Alergia algorithm. In this paper we show how to extend the basic algorithm to also learn automata models for both reactive and timed systems. A key question of our investigation is to what extent one can expect a learned model to be a good approximation of the kind of probabilistic properties one wants to verify by model checking. We establish theoretical convergence properties for the learning algorithm, as well as for probability estimates of system properties expressed in linear time temporal logic and linear continuous stochastic logic. We empirically compare the learning algorithm with statistical model checking and demonstrate the feasibility of the approach for practical system verification.
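
    The core of the Alergia algorithm mentioned above is a state-merging loop driven by a Hoeffding-bound compatibility test on empirical transition frequencies. The sketch below shows that test in isolation; the counts and significance level are hypothetical, and a full implementation would apply it recursively over a prefix tree of observed traces.

```python
import math

def hoeffding_compatible(f1, n1, f2, n2, alpha=0.05):
    """Alergia-style compatibility test: can two states be merged, given
    that a symbol was taken f1 of n1 times from one state and f2 of n2
    times from the other?"""
    if n1 == 0 or n2 == 0:
        return True
    bound = math.sqrt(0.5 * math.log(2.0 / alpha)) * (
        1.0 / math.sqrt(n1) + 1.0 / math.sqrt(n2))
    return abs(f1 / n1 - f2 / n2) < bound

# Hypothetical counts from a prefix tree built over observed system traces.
print(hoeffding_compatible(40, 100, 52, 120))  # True  -> merge candidates
print(hoeffding_compatible(40, 100, 90, 120))  # False -> keep states apart
```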