
    Fast and accurate classification of echocardiograms using deep learning

    Echocardiography is essential to modern cardiology. However, human interpretation limits high-throughput analysis, preventing echocardiography from reaching its full clinical and research potential for precision medicine. Deep learning is a cutting-edge machine-learning technique that has been useful in analyzing medical images but has not yet been widely applied to echocardiography, partly due to the complexity of echocardiograms' multi-view, multi-modality format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize standard views. To this end, we anonymized 834,267 transthoracic echocardiogram (TTE) images from 267 patients (aged 20 to 96 years, 51 percent female, 26 percent obese) seen between 2000 and 2017 and labeled them according to standard views. Images covered a range of real-world clinical variation. We built a multilayer convolutional neural network and used supervised learning to simultaneously classify 15 standard views. Eighty percent of the data was randomly chosen for training, and 20 percent was reserved for validation and testing on echocardiograms never seen during training. Using multiple images from each clip, the model classified among 12 video views with 97.8 percent overall test accuracy without overfitting. Even on single low-resolution images, test accuracy among 15 views was 91.7 percent, versus 70.2 to 83.5 percent for board-certified echocardiographers. Confusion matrices, occlusion experiments, and saliency mapping showed that the model finds recognizable similarities among related views and classifies using clinically relevant image features. In conclusion, deep neural networks can classify essential echocardiographic views simultaneously and with high accuracy. Our results provide a foundation for more complex deep-learning-assisted echocardiographic interpretation. Comment: 31 pages, 8 figures
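
    The multilayer convolutional classifier described above can be illustrated with a minimal, hypothetical sketch in PyTorch. This is not the authors' actual architecture: the layer sizes, the 60x80 input resolution, and the per-clip averaging of frame predictions are all illustrative assumptions.

        # Minimal sketch of a multi-view classifier; the real network's depth,
        # filter counts, and input size are not reproduced here.
        import torch
        import torch.nn as nn

        class ViewClassifier(nn.Module):
            def __init__(self, n_views: int = 15):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(64, n_views)

            def forward(self, x):                  # x: (batch, 1, H, W) grayscale frames
                return self.head(self.features(x).flatten(1))

        model = ViewClassifier()
        clip = torch.randn(10, 1, 60, 80)          # 10 low-resolution frames from one clip
        probs = model(clip).softmax(dim=1)         # per-frame view probabilities
        clip_view = probs.mean(dim=0).argmax()     # average frame predictions per clip
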

    Negative Statements Considered Useful

    Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering, and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs store only positive information and abstain from taking any stance toward statements not contained in them. In this paper, we make the case for explicitly stating interesting statements that are not true. Negative statements would be important for overcoming current limitations of question answering, yet due to their potential abundance, any effort toward compiling them needs a tight coupling with ranking. We introduce two approaches to compiling negative statements. (i) In peer-based statistical inference, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In query-log-based text extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.1M statements for 100K popular Wikidata entities.
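
    The peer-based inference idea can be sketched in a few lines: statements that most highly related peer entities share but the target entity lacks become candidate negatives, ranked here by a simple unsupervised peer-frequency score. The entities and facts below are illustrative toys, not drawn from the paper's Wikidata datasets.

        # Sketch of peer-based statistical inference for negative statements.
        from collections import Counter

        def candidate_negatives(entity_facts, peer_facts_list):
            """entity_facts: set of (property, value); peer_facts_list: list of such sets."""
            counts = Counter(fact for peer in peer_facts_list for fact in peer)
            n_peers = len(peer_facts_list)
            candidates = [
                (fact, count / n_peers)             # unsupervised score: peer frequency
                for fact, count in counts.items()
                if fact not in entity_facts         # absent for the target -> candidate
            ]
            return sorted(candidates, key=lambda kv: kv[1], reverse=True)

        entity = {("occupation", "physicist")}
        peers = [{("occupation", "physicist"), ("award", "Nobel Prize")},
                 {("award", "Nobel Prize")}]
        print(candidate_negatives(entity, peers))
        # [(('award', 'Nobel Prize'), 1.0)] -> "entity has not won a Nobel Prize"
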

    Towards reducing traffic congestion using cooperative adaptive cruise control on a freeway with a ramp

    Purpose: In this paper, the impact of Cooperative Adaptive Cruise Control (CACC) systems on traffic performance is examined using microscopic agent-based simulation. Using a traffic simulation model of a freeway with an on-ramp (created to induce perturbations and trigger stop-and-go traffic), the CACC system's effect on traffic performance is studied. The previously proposed traffic simulation model is extended and validated. By embedding CACC vehicles at different penetration levels, the results show statistically significant effects and indicate the potential of CACC systems to improve traffic characteristics and thereby reduce congestion. The study shows that the impact of CACC is positive but highly dependent on CACC market penetration: the traffic flow rate is proportional to the market penetration rate of CACC-equipped vehicles and to the traffic density.
    Design/methodology/approach: This paper uses microscopic simulation experiments followed by a quantitative statistical analysis. Simulation enables researchers to manipulate system variables and directly observe the outcome on the overall system, offering a unique opportunity to intervene and improve performance. Changes to variables that would require excessive time, or be infeasible to carry out on real systems, can thus be completed within seconds.
    Findings: The findings of this paper are summarized as follows:
    • Provide and validate a platform (an agent-based microscopic traffic simulator) in which any CACC algorithm, current or future, may be evaluated.
    • Provide detailed analysis associated with the implementation of CACC vehicles on freeways.
    • Investigate whether embedding CACC vehicles on freeways has a significant positive impact.
    Research limitations/implications: The main limitation of this research is that it was conducted solely in a computer laboratory. Laboratory experiments and simulations provide a controlled setting, well suited for preliminary testing and calibration of the input variables. However, laboratory testing is by no means sufficient to validate the entire methodology; it must be complemented by field testing. As for the simulation model's limitations, accidents, weather conditions, and obstacles in the road were not taken into consideration, nor were failures in the sensors and communication equipment of the CACC design. Additionally, the special HOV lanes were limited to manual and CACC vehicles; emergency vehicles, buses, motorcycles, and other vehicle types were not considered in this study. Finally, it is worth noting that human behavior is too sophisticated, unpredictable, and flexible to be modeled exactly in a traffic simulation; behavior could occur in real life that the proposed model would fail to capture.
    Practical implications: High CACC market penetration will not occur in the near future, so reaching high penetration will remain a challenge for this type of research, as will public access to the technology. With such a small headway safety gap, even if the technology is proven efficient and safe, getting the public to accept it and feel comfortable using it will remain a challenge for the success of CACC technology.
    Originality/value: The literature on the impact of CACC on traffic dynamics is limited. In addition, no previous work has proposed an open-source microscopic traffic simulator in which different CACC algorithms can be easily used and tested. We believe that the proposed model is more realistic than other traffic models and is one of the very first to model the behavior of CACC vehicles on freeways.
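
    A minimal sketch of the kind of agent-based car-following update such a simulator runs each time step is shown below. The gains, time gaps, and noise term are illustrative assumptions, not the paper's calibrated parameters; the only point is that CACC agents hold a shorter time gap and drive without the human noise that seeds shockwaves.

        # Minimal agent-based car-following sketch; parameters are illustrative.
        import random

        DT = 0.1          # simulation step (s)
        T_CACC = 0.6      # CACC desired time gap (s)
        T_MANUAL = 1.4    # manual-driver desired time gap (s)

        class Vehicle:
            def __init__(self, pos, speed, cacc):
                self.pos, self.speed, self.cacc = pos, speed, cacc

            def accel(self, leader):
                gap = leader.pos - self.pos
                t = T_CACC if self.cacc else T_MANUAL
                # spring-damper spacing law: close the gap error and the speed error
                a = 0.5 * (gap - t * self.speed) + 0.8 * (leader.speed - self.speed)
                if not self.cacc:
                    a += random.gauss(0, 0.3)   # human imperfection seeds shockwaves
                return max(-3.0, min(a, 2.0))   # comfortable accel/decel bounds

        def step(vehicles):
            # vehicles sorted upstream -> downstream; the front vehicle cruises at
            # constant speed, everyone else follows its predecessor
            accs = [v.accel(lead) for v, lead in zip(vehicles, vehicles[1:])] + [0.0]
            for v, a in zip(vehicles, accs):
                v.speed = max(0.0, v.speed + a * DT)
                v.pos += v.speed * DT
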

    Reducing Traffic Congestions by Introducing CACC-Vehicles on a Multi-Lane Highway Using Agent-Based Approach

    Traffic congestion is an ongoing problem of great interest to researchers from different areas of academia. With emerging inter-vehicle communication technology, vehicles can exchange information with their predecessors via wireless communication. In this paper, we present an agent-based model of traffic congestion and examine the impact of embedding CACC (Cooperative Adaptive Cruise Control) vehicles on a highway system consisting of four traffic lanes without overtaking. In our model, CACC vehicles adapt their acceleration and deceleration according to vehicle-to-vehicle communication. We analyze the average speed of the cars, the shockwaves, and the evolution of traffic congestion throughout the lifecycle of the model. The study identifies how CACC vehicles affect the dynamics of traffic flow on a complex network and reduce the oscillatory (stop-and-go) behavior resulting from the acceleration and deceleration of the vehicles.
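
    Reusing the Vehicle and step definitions (and the random import) from the sketch after the previous entry, a penetration-level experiment like the one described here could look roughly as follows. The vehicle count, initial spacing and speed, and run length are made-up values, not the paper's setup.

        # Sketch: vary CACC penetration and compare mean speed after a warm-up run.
        def run(penetration, n=40, steps=3000):
            vehicles = [Vehicle(pos=25.0 * i,
                                speed=20.0,
                                cacc=random.random() < penetration)
                        for i in range(n)]
            for _ in range(steps):                 # 3000 steps = 300 simulated seconds
                step(vehicles)
            return sum(v.speed for v in vehicles) / n

        for p in (0.0, 0.2, 0.5, 0.8):
            print(f"penetration {p:.0%}: mean speed {run(p):.1f} m/s")
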

    Improving Customer Waiting Time at a DMV Center Using Discrete-Event Simulation

    Virginia's Department of Motor Vehicles (DMV) serves a customer base of approximately 5.6 million licensed drivers and ID card holders and 7 million registered vehicle owners; the DMV has more daily face-to-face contact with Virginia's citizens than any other state agency [1]. The DMV faces major difficulty in keeping up with an excessively large customer arrival rate. The consequences are queues building up, stretching out to the entrance doors (and sometimes even outside), and customers complaining. While the DMV state employees serve at their fastest pace, the remarkably long queues indicate a serious service problem that must be dealt with rapidly. Simulation is considered one of the best tools for evaluating and improving complex systems, and in this paper we use it to model one of the DMV centers located in Norfolk, VA. The simulation model is built in Arena 10.0 from Rockwell Automation, using data collected from experts at the DMV Virginia headquarters in Richmond, and the resulting model was verified and validated. The intent of this study is to identify the key problems causing delays at DMV centers and to suggest possible solutions that minimize customers' waiting time. In addition, two tentative hypotheses aiming to improve the model's design are tested and validated.
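
    The study built its model in Arena, a commercial package, but the same multi-server queue can be sketched with the open-source SimPy library. The arrival rate, service time, and clerk count below are illustrative placeholders, not the data collected from the DMV.

        # Minimal discrete-event sketch of a multi-server service center.
        import random
        import simpy

        ARRIVAL_RATE = 1 / 2.0    # one customer every 2 minutes on average
        SERVICE_MEAN = 9.0        # mean service time (minutes)
        N_CLERKS = 5
        waits = []

        def customer(env, clerks):
            arrived = env.now
            with clerks.request() as req:
                yield req                        # wait in queue for a free clerk
                waits.append(env.now - arrived)
                yield env.timeout(random.expovariate(1 / SERVICE_MEAN))

        def arrivals(env, clerks):
            while True:
                yield env.timeout(random.expovariate(ARRIVAL_RATE))
                env.process(customer(env, clerks))

        env = simpy.Environment()
        clerks = simpy.Resource(env, capacity=N_CLERKS)
        env.process(arrivals(env, clerks))
        env.run(until=8 * 60)                    # one 8-hour business day
        print(f"mean wait: {sum(waits)/len(waits):.1f} min over {len(waits)} customers")
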

    A role for Syk-kinase in the control of the binding cycle of the β2 integrins (CD11/CD18) in human polymorphonuclear neutrophils

    A fine control of β2 integrin (CD11/CD18)-mediated firm adhesion of human neutrophils to the endothelial cell monolayer is required to allow ordered emigration. To elucidate the molecular mechanisms that control this process, intracellular protein tyrosine signaling subsequent to β2 integrin-mediated ligand binding was studied by immunoprecipitation and Western blotting. The 72-kDa Syk-kinase, which was tyrosine-phosphorylated upon adhesion, was found to coprecipitate with CD18, the β-subunit of the β2 integrins. Moreover, inhibition of Syk-kinase by piceatannol enhanced adhesion and spreading but diminished N-formyl-Met-Leu-Phe-induced chemotactic migration. The enhanced adhesiveness was associated with integrin clustering, which increases integrin avidity. In contrast, piceatannol had no effect on the surface expression or affinity of β2 integrins. Altogether, this suggests that Syk-kinase controls the alternation of β2 integrin-mediated ligand binding with integrin detachment.

    The Landscape of Inappropriate Laboratory Testing: A 15-Year Meta-Analysis

    Background: Laboratory testing is the single highest-volume medical activity and drives clinical decision-making across medicine. However, the overall landscape of inappropriate testing, which is thought to be dominated by repeat testing, is unclear. Systematic differences in initial vs. repeat testing, measurement criteria, and other factors would suggest new priorities for improving laboratory testing.
    Methods: A multi-database systematic review was performed on published studies from 1997–2012 using strict inclusion and exclusion criteria. Over- vs. underutilization, initial vs. repeat testing, low- vs. high-volume testing, subjective vs. objective appropriateness criteria, and restrictive vs. permissive appropriateness criteria, among other factors, were assessed.
    Results: Overall mean rates of over- and underutilization were 20.6% (95% CI 16.2–24.9%) and 44.8% (95% CI 33.8–55.8%). Overutilization during initial testing (43.9%; 95% CI 35.4–52.5%) was six times higher than during repeat testing (7.4%; 95% CI 2.5–12.3%; P for stratum difference <0.001). Overutilization of low-volume tests (32.2%; 95% CI 25.0–39.4%) was three times that of high-volume tests (10.2%; 95% CI 2.6–17.7%; P<0.001). Overutilization measured according to restrictive criteria (44.2%; 95% CI 36.8–51.6%) was three times higher than for permissive criteria (12.0%; 95% CI 8.0–16.0%; P<0.001). Overutilization measured using subjective criteria (29.0%; 95% CI 21.9–36.1%) was nearly twice as high as for objective criteria (16.1%; 95% CI 11.0–21.2%; P = 0.004). Together, these factors explained over half (54%) of the overall variability in overutilization. There were no statistically significant differences between studies from the United States vs. elsewhere (P = 0.38) or among chemistry, hematology, microbiology, and molecular tests (P = 0.05–0.65) and no robust statistically significant trends over time.
    Conclusions: The landscape of overutilization varies systematically by clinical setting (initial vs. repeat), test volume, and measurement criteria. Underutilization is also widespread, but understudied. Expanding the current focus on reducing repeat testing to include ordering the right test during initial evaluation may lead to fewer errors and better care.
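
    The pooled rates above can be illustrated with a simple, unweighted normal-approximation computation across study-level rates. The actual meta-analysis may have weighted or stratified studies differently, and the rates below are made-up values, not data from the review.

        # Sketch: pooled mean rate across studies with a 95% confidence interval.
        import math

        def pooled_rate(rates):
            n = len(rates)
            mean = sum(rates) / n
            sd = math.sqrt(sum((r - mean) ** 2 for r in rates) / (n - 1))
            half = 1.96 * sd / math.sqrt(n)     # normal-approximation 95% CI
            return mean, (mean - half, mean + half)

        overuse = [0.15, 0.31, 0.22, 0.12, 0.27]   # illustrative study-level rates
        mean, (lo, hi) = pooled_rate(overuse)
        print(f"overutilization: {mean:.1%} (95% CI {lo:.1%}-{hi:.1%})")
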

    Advantages and Limitations of Anticipating Laboratory Test Results from Regression- and Tree-Based Rules Derived from Electronic Health-Record Data

    Laboratory testing is the single highest-volume medical activity, making it useful to ask how well one can anticipate whether a given test result will be high, low, or within the reference interval ("normal"). We analyzed 10 years of electronic health records (a total of 69.4 million blood tests) to see how well standard rule-mining techniques can anticipate test results based on patient age and gender, recent diagnoses, and recent laboratory test results. We evaluated rules according to their positive and negative predictive values (PPV and NPV) and areas under the receiver-operating characteristic curve (ROC AUC). Using a stringent cutoff of PPV and/or NPV ≥ 0.95, standard techniques yield few rules for sendout tests but several for in-house tests, mostly for repeat laboratory tests that are part of the complete blood count and basic metabolic panel. Most rules were clinically and pathophysiologically plausible, and several seemed clinically useful for informing the pre-test probability of a given result. Overall, however, rules were unlikely to function as a general substitute for actually ordering a test. Improving laboratory utilization will likely require different input data and/or alternative methods.
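
    The rule-evaluation step can be sketched as follows: compute PPV, NPV, and ROC AUC for one candidate rule and keep it only if it clears the 0.95 cutoff. The labels and scores are toy data, and the 0.5 decision threshold is an assumption, not the paper's procedure.

        # Sketch: evaluating one candidate rule by PPV, NPV, and ROC AUC.
        from sklearn.metrics import roc_auc_score

        def ppv_npv(y_true, y_pred):
            tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
            fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
            tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
            fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
            return tp / (tp + fp), tn / (tn + fn)

        y_true = [1, 1, 0, 0, 1, 0, 0, 1]                    # 1 = result actually abnormal
        scores = [0.9, 0.4, 0.6, 0.2, 0.7, 0.3, 0.1, 0.55]   # rule's predicted probability
        y_pred = [s >= 0.5 for s in scores]                  # assumed decision threshold

        ppv, npv = ppv_npv(y_true, y_pred)
        print(f"PPV={ppv:.2f}  NPV={npv:.2f}  AUC={roc_auc_score(y_true, scores):.2f}")
        # Keep the rule only if PPV or NPV clears the stringent 0.95 cutoff.
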

    What we learned in kindergarten: five tips for collaboration in oncology

    As you read the five tenets presented here, think about these simple truths of leading and influencing others, managing failure, thinking strategically, and resolving conflicts. Apply them to the world in which we all now live and play. Far too often, work (the place) is viewed simply as where work (the action) occurs. What we are saying is that, although institutional targets (such as reducing wait times to see new patients) are all laudable goals, there has to be more, and play has to become an essential component of work. What can we uncover, rediscover, and create to make the time spent with one another the best possible experience for everyone involved? Even more importantly, what must we do to ensure that what we create and share has the possibility and potential to make our lives and the world a better place? Play isn't something we do as a part of life; it is life.