
    A multiple hashing approach to complete identification of missing RFID tags

    Owing to its superior properties, such as fast identification and a relatively long interrogation range compared with barcode systems, Radio Frequency Identification (RFID) technology has promising application prospects in inventory management. This paper studies the problem of complete identification of missing RFID tags, which is important in practice. Time efficiency is the key performance metric of missing tag identification, yet the existing protocols are ineffective in terms of execution time and can hardly satisfy the requirements of real-time applications. In this paper, a Multi-hashing based Missing Tag Identification (MMTI) protocol is proposed, which achieves better time efficiency by improving the utilization of the time frame used for identification. Specifically, the reader recursively sends bitmaps that reflect the current slot-occupation state to guide the slot selection of the next hashing process, thereby converting more empty or collision slots into the expected singleton slots. We investigate the optimal parameter settings to maximize the performance of the MMTI protocol. Furthermore, we discuss the case of channel error and propose countermeasures to make MMTI workable in scenarios with imperfect communication channels. Extensive simulation experiments are conducted to evaluate the performance of MMTI, and the results demonstrate that the new protocol significantly outperforms other related protocols reported in the current literature. © 2014 IEEE. This work was supported by NSFC (Grant Nos. 60973117, 61173160, 61173162, 60903154, and 61321491), the New Century Excellent Talents in University (NCET) program of the Ministry of Education of China, the National Science Foundation for Distinguished Young Scholars of China (Grant No. 61225010), and a project funded by the China Postdoctoral Science Foundation.
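
    The bitmap-guided rehashing idea can be illustrated with a short simulation. The sketch below is a hypothetical simplification rather than the MMTI protocol itself: the frame size, number of rounds, hash construction, and the reduction of the reader/tag exchange to a single function are all assumptions made for illustration.

```python
# Hypothetical, simplified illustration of bitmap-guided multiple hashing
# (not the exact MMTI protocol): tags that collide in one round rehash in the
# next round, and the reader's occupancy bitmap steers them away from slots
# that are already taken, so more slots end up as verifiable singletons.
import hashlib
from collections import defaultdict

def slot_of(tag_id: str, seed: int, frame_size: int) -> int:
    """Deterministically hash a tag ID and round seed onto a frame slot."""
    digest = hashlib.sha256(f"{tag_id}:{seed}".encode()).hexdigest()
    return int(digest, 16) % frame_size

def assign_singletons(tag_ids, frame_size, rounds=3):
    """Return ({slot: tag} for resolved tags, [tags still unresolved])."""
    singletons = {}              # slots owned by exactly one expected tag
    unresolved = list(tag_ids)   # tags that have not yet obtained a singleton slot
    for seed in range(rounds):
        occupied = set(singletons)          # the "bitmap" the reader would broadcast
        buckets, deferred = defaultdict(list), []
        for tag in unresolved:
            slot = slot_of(tag, seed, frame_size)
            if slot in occupied:
                deferred.append(tag)        # slot already taken: retry next round
            else:
                buckets[slot].append(tag)
        for slot, tags in buckets.items():
            if len(tags) == 1:
                singletons[slot] = tags[0]  # expected singleton: usable for verification
            else:
                deferred.extend(tags)       # collision slot: rehash next round
        unresolved = deferred
    return singletons, unresolved

if __name__ == "__main__":
    tags = [f"EPC{i:04d}" for i in range(120)]
    resolved, leftover = assign_singletons(tags, frame_size=256)
    print(f"{len(resolved)} singleton slots, {len(leftover)} tags unresolved")
```

    Tracking how the fraction of singleton slots grows with each additional round mirrors the frame-utilization gain the abstract attributes to the recursive bitmaps.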

    Completely pinpointing the missing RFID tags in a time-efficient way

    © 1968-2012 IEEE. Radio Frequency Identification (RFID) technology has been widely used for inventory management in many scenarios, e.g., warehouses, retail stores, and hospitals. This paper investigates the challenging problem of complete identification of missing tags in large-scale RFID systems. Although this problem has attracted extensive attention from academia and industry, the existing work can hardly satisfy stringent real-time requirements. In this paper, a Slot Filter-based Missing Tag Identification (SFMTI) protocol is proposed to reconcile some expected collision slots into singleton slots and to filter out the expected empty slots as well as the unreconcilable collision slots, thereby achieving improved time efficiency. A theoretical analysis is conducted to minimize the execution time of the proposed SFMTI. We then propose a cost-effective method to extend SFMTI to multi-reader scenarios. Extensive simulation experiments and performance results demonstrate that the proposed SFMTI protocol outperforms the most promising Iterative ID-free Protocol (IIP), reducing the required execution time by nearly 45%, and is within a factor of 1.18 of the lower bound on the minimum execution time. This work was supported by NSFC (Grant Nos. 60973117, 61173160, 61173162, 60903154, and 61321491), the New Century Excellent Talents in University (NCET) program of the Ministry of Education of China, the National Science Foundation for Distinguished Young Scholars of China (Grant No. 61225010), the Doctoral Fund of the Ministry of Education of China (Grant No. 20130041110019), and a project funded by the China Postdoctoral Science Foundation.
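
    The slot-filtering idea can also be sketched in a few lines. The snippet below is an illustrative simplification, not the SFMTI design: the hash, frame size, and the rule that a two-tag collision is reconciled by polling the two tags in separate sub-slots are assumptions; expected-empty and larger collision slots are simply dropped from the polled frame.

```python
# Illustrative sketch of slot filtering (assumed parameters, not the SFMTI spec):
# from the expected tag list the reader predicts each slot's occupancy, keeps
# expected singletons, splits two-tag collisions into two sub-slots, and drops
# expected-empty and unreconcilable collision slots from the frame it polls.
import hashlib
from collections import defaultdict

def slot_of(tag_id: str, frame_size: int) -> int:
    digest = hashlib.sha256(tag_id.encode()).hexdigest()
    return int(digest, 16) % frame_size

def build_polling_plan(expected_tags, frame_size):
    occupancy = defaultdict(list)
    for tag in expected_tags:
        occupancy[slot_of(tag, frame_size)].append(tag)

    plan = []                                   # (slot, expected responder) pairs
    for slot in range(frame_size):
        tags = occupancy.get(slot, [])
        if len(tags) == 1:
            plan.append((slot, tags[0]))        # expected singleton
        elif len(tags) == 2:
            plan.append((slot, tags[0]))        # reconciled: first sub-slot
            plan.append((slot, tags[1]))        # reconciled: second sub-slot
        # len == 0 or len >= 3: filtered out, costing no frame time
    return plan

if __name__ == "__main__":
    expected = [f"EPC{i:04d}" for i in range(150)]
    plan = build_polling_plan(expected, frame_size=256)
    print(f"{len(plan)} useful polling slots derived from a 256-slot frame")
```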

    Twin Paradox and the logical foundation of relativity theory

    We study the foundation of space-time theory in the framework of first-order logic (FOL). Since the foundation of mathematics has been successfully carried through (via set theory) in FOL, it is not entirely impossible to do the same for space-time theory (or relativity). First we recall a simple and streamlined FOL axiomatization SpecRel of special relativity from the literature. SpecRel is complete with respect to questions about inertial motion. Then we ask whether we can prove the usual relativistic properties of accelerated motion (e.g., clocks in acceleration) in SpecRel. As it turns out, this is practically equivalent to asking whether SpecRel is strong enough to "handle" (or treat) accelerated observers. We show that there is a mathematical principle, called induction (IND), coming from real analysis, which needs to be added to SpecRel in order to handle situations involving relativistic acceleration. We present an extended version AccRel of SpecRel which is strong enough to handle accelerated motion, in particular accelerated observers. Among other results, we show that the Twin Paradox becomes provable in AccRel, but is not provable without IND. Comment: 24 pages, 6 figures.
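
    As a rough indication of the kind of first-order principle at stake (hedged: the paper's exact formulation of IND may differ), induction/continuity axioms in this FOL setting are usually given as a schema, one axiom per definable property, e.g. a least-upper-bound schema over definable, bounded sets of quantities:

```latex
% Indicative rendering only (not necessarily the paper's exact IND schema):
% for every formula \varphi(x,\bar{p}) of the language, one axiom asserting
% that a nonempty, bounded, definable set of quantities has a supremum s.
\forall \bar{p}\; \Big[ \big( \exists x\, \varphi(x,\bar{p}) \;\wedge\;
   \exists b\, \forall x\, (\varphi(x,\bar{p}) \rightarrow x \le b) \big)
 \;\rightarrow\;
   \exists s\, \big( \forall x\, (\varphi(x,\bar{p}) \rightarrow x \le s) \;\wedge\;
   \forall b\, \big( \forall x\, (\varphi(x,\bar{p}) \rightarrow x \le b) \rightarrow s \le b \big) \big) \Big]
```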

    Comparing theories: the dynamics of changing vocabulary. A case-study in relativity theory

    There are several first-order logic (FOL) axiomatizations of special relativity theory in the literature, all looking essentially different but claiming to axiomatize the same physical theory. In this paper, we elaborate a comparison, in the framework of mathematical logic, between these FOL theories for special relativity. For this comparison, we use a version of mathematical definability theory in which new entities can also be defined, besides new relations over already available entities. In particular, we build an interpretation of the reference-frame-oriented theory SpecRel into the observationally oriented Signalling theory of James Ax. This interpretation provides SpecRel with an operational/experimental semantics. Then we make precise, "quantitative" comparisons between these two theories using the notion of definitional equivalence. This is an application of logic to the philosophy of science and physics in the spirit of Johan van Benthem's work. Comment: 27 pages, 8 figures. To appear in the Springer book series Trends in Logic.
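
    For orientation (hedged: the paper uses a generalized definability theory that also allows new entities, so its precise notion is broader), the standard idea behind definitional equivalence is that two theories are mutually interpretable by translations whose round trips are provably trivial:

```latex
% Rough, standard formulation (the paper's generalized version differs in detail):
% T_1 and T_2 are definitionally equivalent when there are translations
% \tau : L_1 \to L_2 and \sigma : L_2 \to L_1 such that
T_2 \vdash \tau(\varphi) \text{ for every axiom } \varphi \text{ of } T_1,
\qquad
T_1 \vdash \sigma(\psi) \text{ for every axiom } \psi \text{ of } T_2,
% and the composed translations change nothing up to provable equivalence:
T_1 \vdash \varphi \leftrightarrow \sigma(\tau(\varphi)),
\qquad
T_2 \vdash \psi \leftrightarrow \tau(\sigma(\psi)).
```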

    SUMO-2 promotes mRNA translation by enhancing interaction between eIF4E and eIF4G

    Small ubiquitin-like modifier (SUMO) proteins regulate many important eukaryotic cellular processes through reversible covalent conjugation to target proteins. In addition to its many well-known biological consequences, such as subcellular translocation of proteins, subnuclear structure formation, and modulation of transcriptional activity, we show here that SUMO-2 also plays a role in mRNA translation. SUMO-2 promoted formation of the active eukaryotic initiation factor 4F (eIF4F) complex by enhancing the interaction between eukaryotic initiation factor 4E (eIF4E) and eukaryotic initiation factor 4G (eIF4G), and induced translation of a subset of proteins, such as cyclin D1 and c-Myc, which are essential for cell proliferation and apoptosis. As expected, overexpression of SUMO-2 partially cancelled out the disrupting effect of 4EGI-1, a small-molecule inhibitor of the eIF4E/eIF4G interaction, on formation of the eIF4F complex, cap-dependent protein translation, cell proliferation, and apoptosis. On the other hand, SUMO-2 knockdown via shRNA partially impaired cap-dependent translation and cell proliferation and promoted apoptosis. These results collectively suggest that SUMO-2 conjugation plays a crucial regulatory role in protein synthesis. Thus, this report might contribute to the basic understanding of mammalian protein translation and shed new light on the role of SUMO in this process. © 2014 Chen et al.

    Designing an automated clinical decision support system to match clinical practice guidelines for opioid therapy for chronic pain

    Background: Opioid prescribing for chronic pain is common and controversial, but recommended clinical practices are followed inconsistently in many clinical settings. Strategies for increasing adherence to clinical practice guideline recommendations are needed to increase effectiveness and reduce the negative consequences of opioid prescribing in chronic pain patients. Methods: Here we describe the process and outcomes of a project to operationalize the 2003 VA/DOD Clinical Practice Guideline for Opioid Therapy for Chronic Non-Cancer Pain into a computerized decision support system (DSS) to encourage good opioid prescribing practices during primary care visits. We based the DSS on the existing ATHENA-DSS. We used an iterative process of design, testing, and revision of the DSS by a diverse team, including guideline authors, medical informatics experts, clinical content experts, and end-users, to convert the written clinical practice guideline into a computable algorithm that generates patient-specific recommendations for care based upon existing information in the electronic medical record (EMR), together with a set of clinical tools. Results: The iterative revision process identified numerous and varied problems with the initially designed system despite diverse expert participation in the design process. The process of operationalizing the guideline identified areas in which the guideline was vague, left decisions to clinical judgment, or required clarification of detail to ensure safe clinical implementation. The revisions led to workable solutions to problems, defined the limits of the DSS and its utility in clinical practice, improved integration into clinical workflow, and improved the clarity and accuracy of system recommendations and tools. Conclusions: Use of this iterative process led to the development of a multifunctional DSS that met the approval of the clinical practice guideline authors, content experts, and clinicians involved in testing. The process and experiences described provide a model for the development of other DSSs that translate written guidelines into actionable, real-time clinical recommendations.
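
    As a purely hypothetical illustration of what converting a written guideline into a computable algorithm over EMR data can look like, the sketch below encodes a few guideline-style rules; the field names, thresholds, and recommendation wording are invented and are not taken from the VA/DOD guideline or from ATHENA-DSS.

```python
# Purely hypothetical illustration of a guideline statement rendered as a
# computable, patient-specific rule (field names, thresholds, and
# recommendation text are invented; this is not the ATHENA-DSS logic).
from dataclasses import dataclass
from typing import List

@dataclass
class PatientRecord:
    morphine_equivalent_daily_dose: float   # mg/day, as pulled from the EMR
    last_urine_drug_screen_days_ago: int
    has_signed_opioid_agreement: bool

def recommendations(p: PatientRecord) -> List[str]:
    """Evaluate a few guideline-style rules against EMR data."""
    advice = []
    if p.morphine_equivalent_daily_dose >= 100:          # invented threshold
        advice.append("High opioid dose: consider taper or specialty referral.")
    if p.last_urine_drug_screen_days_ago > 365:
        advice.append("No urine drug screen in the past year: consider ordering one.")
    if not p.has_signed_opioid_agreement:
        advice.append("No opioid treatment agreement on file: consider obtaining one.")
    return advice

if __name__ == "__main__":
    patient = PatientRecord(120.0, 400, False)
    for line in recommendations(patient):
        print("-", line)
```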

    Diet assessment of the Atlantic Sea Nettle Chrysaora quinquecirrha in Barnegat Bay, New Jersey, using next-generation sequencing

    Next-generation sequencing (NGS) methodologies have proven useful in deciphering the food items of generalist predators, but have yet to be applied to gelatinous animal gut and tentacle content. NGS can potentially supplement traditional methods of visual identification. Chrysaora quinquecirrha (Atlantic sea nettle) has become progressively more abundant in Mid-Atlantic United States estuaries, including Barnegat Bay (New Jersey), potentially having detrimental effects on both marine organisms and human enterprises. Full characterization of this predator's diet is essential for a comprehensive understanding of its impact on the food web and for its management. Here, we tested the efficacy of NGS for prey-item determination in the Atlantic sea nettle. We implemented an NGS 'shotgun' approach to randomly sequence DNA fragments isolated from gut lavages and gastric pouch/tentacle picks of eight and 84 sea nettles, respectively, and verified the results against visual identification and co-occurring plankton tows. Over 550,000 contigs were assembled from ~110 million paired-end reads. Of these, 100 contigs were confidently assigned to 23 different taxa, including soft-bodied organisms previously undocumented as prey species; the identified taxa included copepods, fish, ctenophores, anemones, amphipods, barnacles, shrimp, polychaete worms, flukes, flatworms, echinoderms, gastropods, bivalves, and hemichordates. Our results indicate not only that a 'shotgun' NGS approach can supplement visual identification methods, but also that targeted enrichment of a specific amplicon/gene is not a prerequisite for identifying Atlantic sea nettle prey items.
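
    As a hedged sketch of the kind of post-processing such a pipeline ends with, the snippet below tallies confident contig-to-taxon assignments from a tab-separated best-hit table; the file name, column layout, and thresholds are assumptions for illustration, not the study's actual workflow.

```python
# Hypothetical post-processing sketch: tally which taxa the assembled contigs
# were confidently assigned to, given a tab-separated table mapping each
# contig to a best-hit taxon. The file name, column layout, and thresholds
# are assumptions for illustration, not the pipeline used in the study.
import csv
from collections import Counter

MIN_PERCENT_IDENTITY = 97.0   # assumed confidence cutoffs
MAX_EVALUE = 1e-20

def tally_taxa(hits_path: str) -> Counter:
    """Count contigs per taxon, keeping only high-confidence hits.

    Expected columns (assumed): contig_id, taxon, percent_identity, evalue.
    """
    counts = Counter()
    seen_contigs = set()
    with open(hits_path, newline="") as fh:
        for contig_id, taxon, pident, evalue in csv.reader(fh, delimiter="\t"):
            if contig_id in seen_contigs:
                continue                       # keep one assignment per contig
            if float(pident) >= MIN_PERCENT_IDENTITY and float(evalue) <= MAX_EVALUE:
                counts[taxon] += 1
                seen_contigs.add(contig_id)
    return counts

if __name__ == "__main__":
    for taxon, n in tally_taxa("contig_best_hits.tsv").most_common():
        print(f"{taxon}\t{n}")
```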

    Longitudinal analyses of the DNA methylome in deployed military servicemen identify susceptibility loci for post-traumatic stress disorder

    In order to determine the impact of the epigenetic response to traumatic stress on post-traumatic stress disorder (PTSD), this study examined longitudinal changes in genome-wide blood DNA methylation profiles in relation to the development of PTSD symptoms in two prospective military cohorts (one discovery and one replication data set). In the first cohort, consisting of male Dutch military servicemen (n=93), the emergence of PTSD symptoms over a deployment period to a combat zone was significantly associated with alterations in DNA methylation levels at 17 genomic positions and 12 genomic regions. Evidence for mediation of the relation between combat trauma and PTSD symptoms by longitudinal changes in DNA methylation was observed at several positions and regions. Bioinformatic analyses of the reported associations identified significant enrichment in several pathways relevant to symptoms of PTSD. Targeted analyses of the significant findings from the discovery sample in an independent prospective cohort of male US Marines (n=98) replicated the observed relation between decreases in DNA methylation levels and PTSD symptoms at genomic regions in ZFP57, RNF39 and HIST1H2APS2. Together, our study pinpoints three novel genomic regions where longitudinal decreases in DNA methylation across the period of exposure to combat trauma mark susceptibility to PTSD.

    Cosmic Flows on 100 Mpc/h Scales: Standardized Minimum Variance Bulk Flow, Shear and Octupole Moments

    The low-order moments, such as the bulk flow and shear, of the large-scale peculiar velocity field are sensitive probes of the matter density fluctuations on very large scales. In practice, however, peculiar velocity surveys are usually sparse and noisy, which can lead to the aliasing of small-scale power into what is meant to be a probe of the largest scales. Previously, we developed an optimal "minimum variance" (MV) weighting scheme designed to overcome this problem by minimizing the difference between the measured bulk flow (BF) and that which would be measured by an ideal survey. Here we extend this MV analysis to include the shear and octupole moments, which are designed to have almost no correlations between them, so that they are virtually orthogonal. We apply this MV analysis to a compilation of all major peculiar velocity surveys, consisting of 4536 measurements. Our estimate of the BF on scales of ~100 Mpc/h has a magnitude of |v| = 416 +/- 78 km/s towards Galactic l = 282 +/- 11 degrees and b = 6 +/- 6 degrees. This result is in disagreement with LCDM with WMAP5 cosmological parameters at a high confidence level, but is in good agreement with our previous MV result without an orthogonality constraint, showing that the shear and octupole moments did not contaminate the previous BF measurement. The shear and octupole moments are consistent with the WMAP5 power spectrum, although the measurement noise is larger for these moments than for the BF. The relatively low shear moments suggest that the sources responsible for the BF are at large distances. Comment: 13 pages, 7 figures, 4 tables. Some changes to reflect the published version.
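
    Schematically (a simplified sketch of the weighting idea, with notation not taken from the paper), each MV moment is a weighted sum of the measured line-of-sight peculiar velocities, with the weights chosen to minimize the expected squared deviation from the moment an ideal, densely sampled survey would measure:

```latex
% Simplified sketch of the minimum-variance construction; the constraints and
% full covariance treatment of the authors' earlier MV paper are omitted here.
u_p = \sum_{n} w_{p,n} S_n ,
\qquad
\langle \, (u_p - U_p)^2 \, \rangle \;\longrightarrow\; \min_{w_{p,n}} ,
% where S_n are the measured line-of-sight peculiar velocities and U_p is the
% corresponding moment (bulk flow, shear, or octupole) of the ideal survey.
```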