
    Fundamental Limits of "Ankylography" due to Dimensional Deficiency

    Single-shot diffractive imaging of truly 3D structures suffers from a dimensional deficiency and does not scale. The applicability of "ankylography" is limited to objects that are small in at least one dimension or that are otherwise essentially 2D. Comment: 2 pages, no figures

    A Similarity Measure for GPU Kernel Subgraph Matching

    Accelerator architectures specialize in executing SIMD (single instruction, multiple data) operations in lockstep. Because the majority of CUDA applications are parallelized loops, control flow information can provide an in-depth characterization of a kernel. CUDAflow is a tool that statically separates CUDA binaries into basic-block regions and dynamically measures instruction and basic-block frequencies. CUDAflow captures this information in a control flow graph (CFG) and performs subgraph matching across kernels' CFGs to gain insights into an application's resource requirements, based on the shape and traversal of the graph, the instruction operations executed, and the registers allocated, among other information. The utility of CUDAflow is demonstrated with SHOC and Rodinia application case studies on a variety of GPU architectures, revealing novel thread divergence characteristics that help end users, autotuners and compilers generate high-performing code.
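As a rough illustration of the kind of CFG comparison described, the sketch below scores two kernels' control flow graphs with a plain Jaccard similarity over their edge sets. The kernel CFGs and the similarity measure here are hypothetical simplifications: the actual tool performs subgraph matching and also weighs instruction operations and register allocation.

```python
# Minimal sketch: compare two kernels' control flow graphs (CFGs) with a
# Jaccard similarity over their edge sets. All kernel data below is
# hypothetical; CUDAflow's real measure is richer than this.

def cfg_similarity(edges_a, edges_b):
    """Jaccard similarity of two CFGs given as sets of (src, dst) edges."""
    union = len(edges_a | edges_b)
    return len(edges_a & edges_b) / union if union else 1.0

# Hypothetical basic-block CFGs for two GPU kernels.
kernel_a = {("entry", "loop"), ("loop", "loop"), ("loop", "exit")}
kernel_b = {("entry", "loop"), ("loop", "loop"), ("loop", "branch"),
            ("branch", "loop"), ("branch", "exit")}

print(round(cfg_similarity(kernel_a, kernel_b), 3))  # 0.333
```

A score near 1 suggests two kernels exercise similar control flow and may have comparable resource requirements.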

    Genome scan of Diabrotica virgifera virgifera for genetic variation associated with crop rotation tolerance

    Crop rotation has been a valuable technique for the control of Diabrotica virgifera virgifera for almost a century. However, during the last two decades, crop rotation has ceased to be effective in an expanding area of the US corn belt. This failure appears to be due to a change in the insect's oviposition behaviour, which, in all probability, has an underlying genetic basis. A preliminary genome scan using 253 amplified fragment-length polymorphism (AFLP) markers sought to identify genetic variation associated with the circumvention of crop rotation. Samples of D. v. virgifera from east-central Illinois, where crop rotation is ineffective, were compared with samples from locations in Iowa that the behavioural variant has yet to reach. A single AFLP marker showed signs of having been influenced by selection for the circumvention of crop rotation. However, this marker was not diagnostic. The lack of markers strongly associated with the trait may be due to an insufficient density of marker coverage across the genome. A weak but significant general heterogeneity was observed between the Illinois and Iowa samples at microsatellite loci and AFLP markers. This had not been detected in previous population genetic studies of D. v. virgifera and may indicate a reduction in gene flow between variant and wild-type beetles.
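The per-marker heterogeneity being tested can be illustrated with a simple Pearson chi-square test on a 2x2 table of band-present/band-absent counts at a single dominant marker. The counts below are invented for illustration; the study tested 253 AFLP markers plus microsatellite loci with more careful outlier-detection methods.

```python
# Sketch: Pearson chi-square test of allele-frequency heterogeneity
# between two population samples at one dominant AFLP marker.
# Counts are hypothetical.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical band-present / band-absent counts: Illinois (60/40) vs Iowa (45/55).
chi2 = chi_square_2x2(60, 40, 45, 55)
print(round(chi2, 3))  # 4.511, above the 3.84 critical value (df = 1, p < 0.05)
```

A statistic above the critical value at one marker is only suggestive; with hundreds of markers, a multiple-testing correction is needed before calling a locus an outlier.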

    Removing exogenous information using pedigree data

    Management of certain populations requires the preservation of their pure genetic background. When, for whatever reason, undesired alleles are introduced, the original genetic conformation must be recovered. The present study tested, through computer simulations, the power of recovery (the ability to remove the foreign information) from genealogical data. Simulated scenarios comprised different numbers of exogenous individuals forming part of the founder population and different numbers of unmanaged generations before the removal programme started. Strategies were based on variables arising from classical pedigree analyses, such as founders' contributions and partial coancestry. The efficiency of the different strategies was measured as the proportion of native genetic information remaining in the population. Consequences for the inbreeding and coancestry levels of the population were also evaluated. Minimisation of the exogenous founders' contributions was the most powerful method, removing the largest amount of exogenous genetic information in just one generation. However, as a side effect, it led to the highest values of inbreeding. Scenarios with a large number of initial exogenous alleles (i.e. a high percentage of non-native founders), or many generations of mixing, became very difficult to recover, pointing out the importance of guarding against introgression events in populations.
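The key pedigree variable, a founder's expected contribution, can be computed recursively: an individual inherits half of each parent's founder contributions. The sketch below uses a hypothetical three-generation pedigree with one exogenous founder; it is an illustration of the quantity, not the paper's simulation code.

```python
# Sketch: expected genetic contribution of a founder to an individual,
# computed from pedigree data. The pedigree below is hypothetical.

def founder_contribution(pedigree, ind, founder):
    """Expected fraction of `ind`'s genome descending from `founder`."""
    sire, dam = pedigree.get(ind, (None, None))
    if sire is None and dam is None:          # `ind` is itself a founder
        return 1.0 if ind == founder else 0.0
    return 0.5 * (founder_contribution(pedigree, sire, founder)
                  + founder_contribution(pedigree, dam, founder))

# ind -> (sire, dam); F1..F3 are native founders, X is the exogenous founder.
pedigree = {"A": ("F1", "F2"), "B": ("X", "F3"), "C": ("A", "B")}

print(founder_contribution(pedigree, "C", "X"))  # 0.25
```

Minimising this quantity, summed over all exogenous founders, when choosing parents is essentially the strategy that removed the most foreign information in a single generation.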

    Evaluating Maintainability Prejudices with a Large-Scale Study of Open-Source Projects

    Exaggeration or context changes can turn maintainability experience into prejudice. For example, JavaScript is often seen as the least elegant language and hence the least maintainable. Such prejudice should not guide decisions without prior empirical validation. We formulated 10 hypotheses about maintainability based on such prejudices and tested them on a large set of open-source projects (6,897 GitHub repositories, 402 million lines, 5 programming languages). We operationalized maintainability with five static analysis metrics. We found that JavaScript code is not worse than other code, Java code shows higher maintainability than C# code, and C code has longer methods than other code. The quality of interface documentation is better in Java code than in other code. Code developed by teams is not of higher, and large code bases are not of lower, maintainability. Projects with high maintainability are not more popular or more often forked. Overall, most hypotheses are not supported by open-source data. Comment: 20 pages
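One of the static metrics mentioned, method length, is straightforward to operationalize. The sketch below measures average function length for Python source via the standard `ast` module; this is purely illustrative, as the study's own tooling and language set differ.

```python
# Sketch: average method length (in source lines) as a static
# maintainability metric, measured here on Python code via `ast`.
# Requires Python 3.8+ for `end_lineno`.
import ast

def avg_method_length(source):
    """Mean number of source lines per top-level or nested function."""
    lengths = [node.end_lineno - node.lineno + 1
               for node in ast.walk(ast.parse(source))
               if isinstance(node, ast.FunctionDef)]
    return sum(lengths) / len(lengths) if lengths else 0.0

code = """
def short():
    return 1

def longer(x):
    y = x + 1
    y *= 2
    return y
"""
print(avg_method_length(code))  # (2 + 4) / 2 = 3.0
```

Aggregating such a metric over thousands of repositories is what turns an anecdotal prejudice ("C code has long functions") into a testable hypothesis.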

    Toward optimal implementation of cancer prevention and control programs in public health: A study protocol on mis-implementation

    Background: Much of the cancer burden in the USA is preventable through the application of existing knowledge. State-level funders and public health practitioners are in ideal positions to affect programs and policies related to cancer control. Mis-implementation refers to ending effective programs and policies prematurely or continuing ineffective ones. Greater attention to mis-implementation should lead to the use of effective interventions and more efficient expenditure of resources, which, in the long term, will lead to more positive cancer outcomes.
    Methods: This is a three-phase study that takes a comprehensive approach, leading to the elucidation of tactics for addressing mis-implementation. Phase 1: we assess the extent to which mis-implementation is occurring among state cancer control programs in public health. This initial phase will involve a survey of 800 practitioners representing all states. The programs represented will span the full continuum of cancer control, from primary prevention to survivorship. Phase 2: using data from phase 1 to identify organizations in which mis-implementation is particularly high or low, the team will conduct eight comparative case studies to gain a richer understanding of mis-implementation and of contextual differences. These case studies will highlight lessons learned about mis-implementation and identify hypothesized drivers. Phase 3: agent-based modeling will be used to identify dynamic interactions between individual capacity, organizational capacity, use of evidence, funding, and external factors driving mis-implementation. The team will then translate and disseminate findings from phases 1 to 3 to practitioners and practice-related stakeholders to support the reduction of mis-implementation.
    Discussion: This study is innovative and significant because it will (1) be the first to refine and further develop reliable and valid measures of mis-implementation of public health programs; (2) bring together a strong, transdisciplinary team with significant expertise in practice-based research; (3) use agent-based modeling to address cancer control implementation; and (4) use a participatory, evidence-based, stakeholder-driven approach that will identify key leverage points for addressing mis-implementation among state public health programs. This research is expected to provide replicable computational simulation models that can identify leverage points and public health system dynamics to reduce mis-implementation in cancer control, and may be of interest to other health areas.

    Implementation of routine outcome measurement in child and adolescent mental health services in the United Kingdom: a critical perspective

    The aim of this commentary is to provide an overview of the clinical outcome measures currently recommended for use in UK Child and Adolescent Mental Health Services (CAMHS), focusing on measures that are applicable across a wide range of conditions, have established validity and reliability, or are innovative in their design. We also provide an overview of the barriers and drivers to the use of Routine Outcome Measurement (ROM) in clinical practice.

    Secular Evolution of Galaxy Morphologies

    Today we have abundant evidence that spirals evolve dynamically through various secular or episodic processes, such as bar formation and destruction, bulge growth and mergers, sometimes over much shorter periods than the standard galaxy age of 10-15 Gyr. This, coupled with the known properties of the Hubble sequence, leads to a unique sense of evolution: from Sm to Sa. Linking this to the known mass components provides new indications of the nature of dark matter in galaxies. The existence of large amounts of as-yet-undetected dark gas appears to be the most natural option. Bounds on the amount of dark stars can be given, since their formation is mostly irreversible and obviously requires an equal amount of gas. Comment: 8 pages, Latex2e, crckapb.sty macros, 1 Postscript figure, replaced with TeX source; to be published in the proceedings of the "Dust-Morphology" conference, Johannesburg, 22-26 January 1996, D. Block (ed.), Kluwer, Dordrecht

    The Pioneer Anomaly

    Radio-metric Doppler tracking data received from the Pioneer 10 and 11 spacecraft from heliocentric distances of 20-70 AU have consistently indicated the presence of a small, anomalous, blue-shifted frequency drift changing uniformly at a rate of ~6 x 10^{-9} Hz/s. Ultimately, the drift was interpreted as a constant sunward deceleration of each spacecraft at the level of a_P = (8.74 +/- 1.33) x 10^{-10} m/s^2. This apparent violation of Newton's gravitational inverse-square law has become known as the Pioneer anomaly; the nature of this anomaly remains unexplained. In this review, we summarize the current knowledge of the physical properties of the anomaly and the conditions that led to its detection and characterization. We review various mechanisms proposed to explain the anomaly and discuss the current state of efforts to determine its nature. A comprehensive new investigation of the anomalous behavior of the two Pioneers has begun recently. The new efforts rely on the much-extended set of radio-metric Doppler data for both spacecraft in conjunction with the newly available complete record of their telemetry files and a large archive of original project documentation. As the new study is yet to report its findings, this review provides the necessary background for the new results expected to appear in the near future. In particular, we provide a significant amount of information on the design, operations and behavior of the two Pioneers during their entire missions, including descriptions of various data formats and techniques used for their navigation and radio-science data analysis. As most of this information was recovered relatively recently, it was not used in the previous studies of the Pioneer anomaly, but it is critical for the new investigation. Comment: 165 pages, 40 figures, 16 tables; accepted for publication in Living Reviews in Relativity
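As a back-of-the-envelope consistency check, the quoted drift can be converted to an acceleration via the one-way Doppler relation a ≈ c·(df/dt)/f. The S-band carrier frequency below is an assumed round value, and the published figure comes from the exact two-way Doppler bookkeeping, so this only shows that the two quoted numbers are of the same order.

```python
# Order-of-magnitude check: frequency drift -> equivalent acceleration,
# via the one-way Doppler relation a ~ c * (df/dt) / f.
# The carrier frequency is an assumed round S-band value.

c = 2.998e8        # speed of light, m/s
f = 2.29e9         # assumed S-band carrier frequency, Hz
f_dot = 6e-9       # quoted anomalous drift, Hz/s

a = c * f_dot / f  # ~8e-10 m/s^2, same ballpark as a_P = 8.74e-10 m/s^2
print(f"{a:.2e} m/s^2")
```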

    A Dynamic Model of Interactions of Ca^(2+), Calmodulin, and Catalytic Subunits of Ca^(2+)/Calmodulin-Dependent Protein Kinase II

    During the acquisition of memories, influx of Ca^(2+) into the postsynaptic spine through the pores of activated N-methyl-D-aspartate-type glutamate receptors triggers processes that change the strength of excitatory synapses. The pattern of Ca^(2+) influx during the first few seconds of activity is interpreted within the Ca^(2+)-dependent signaling network such that synaptic strength is eventually either potentiated or depressed. Many of the critical signaling enzymes that control synaptic plasticity, including Ca^(2+)/calmodulin-dependent protein kinase II (CaMKII), are regulated by calmodulin, a small protein that can bind up to four Ca^(2+) ions. As a first step toward clarifying how the Ca^(2+)-signaling network decides between potentiation and depression, we have created a kinetic model of the interactions of Ca^(2+), calmodulin, and CaMKII that represents our best understanding of the dynamics of these interactions under conditions that resemble those in a postsynaptic spine. We constrained parameters of the model from data in the literature, or from our own measurements, and then predicted time courses of activation and autophosphorylation of CaMKII under a variety of conditions. Simulations showed that species of calmodulin with fewer than four bound Ca^(2+) ions play a significant role in the activation of CaMKII in the physiological regime, supporting the notion that processing of Ca^(2+) signals in a spine involves competition among target enzymes for binding to unsaturated species of CaM in an environment in which the concentration of Ca^(2+) is fluctuating rapidly. Indeed, we showed that the dependence of activation on the frequency of Ca^(2+) transients arises from the kinetics of interaction of fluctuating Ca^(2+) with calmodulin/CaMKII complexes. We used parameter sensitivity analysis to identify which parameters will be most beneficial to measure more carefully to improve the accuracy of predictions. This model provides a quantitative base from which to build more complex dynamic models of postsynaptic signal transduction during learning.
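The mass-action kinetics underlying such a model can be sketched with a single binding step, CaM + Ca^(2+) <-> CaM·Ca^(2+), integrated by forward Euler at a clamped free Ca^(2+) concentration. The rate constants below are placeholders, not the paper's fitted parameters.

```python
# Sketch: one mass-action binding step integrated with forward Euler.
# Rate constants are placeholders (Kd = koff/kon = 10 uM), not the
# paper's fitted values; free [Ca2+] is held fixed for simplicity.

def simulate(kon, koff, ca, cam0=1.0, dt=1e-5, steps=200_000):
    """Return the fraction of CaM bound after `steps` Euler steps."""
    cam, bound = cam0, 0.0
    for _ in range(steps):
        flux = kon * ca * cam - koff * bound   # net binding rate
        cam -= flux * dt
        bound += flux * dt
    return bound / cam0

# Placeholder rates: kon in /uM/s, koff in /s.
frac = simulate(kon=10.0, koff=100.0, ca=10.0)
print(round(frac, 3))  # at [Ca2+] = Kd the equilibrium bound fraction is 0.5
```

Chaining many such steps (four Ca^(2+) sites on CaM, plus CaM binding to CaMKII) and driving them with fluctuating rather than clamped Ca^(2+) is what gives the full model its frequency dependence.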