
    Empirical Challenge for NC Theory

    Horn-satisfiability, or Horn-SAT, is the problem of deciding whether a satisfying assignment exists for a Horn formula: a conjunction of clauses, each with at most one positive literal (also known as Horn clauses). It is a well-known P-complete problem, which implies that unless P = NC it is hard to parallelize. In this paper, we empirically show that, under a known simple random model for generating Horn formulas, the fraction of hard-to-parallelize instances (those closer to worst-case behavior) is infinitesimally small. We show that the depth of a parallel algorithm for Horn-SAT is polylogarithmic on average, for almost all instances, while keeping the work linear. This challenges theoreticians and programmers to look beyond worst-case analysis and to devise practical algorithms coupled with corresponding performance guarantees. Comment: 10 pages, 5 figures. Accepted at HOPC'2
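    The classic sequential marking (unit-propagation) algorithm for Horn-SAT gives a feel for the problem the abstract discusses. The sketch below is the standard linear-work sequential baseline, not the parallel algorithm analyzed in the paper; the clause encoding and function name are illustrative choices.

```python
def horn_sat(clauses):
    """Decide satisfiability of a Horn formula.

    Each clause is a (positives, negatives) pair: a set with at most one
    positive variable, and a set of negated variables.  For example,
    ({'x'}, {'y', 'z'}) encodes (x OR NOT y OR NOT z).  The marking
    algorithm starts with every variable False and propagates forced
    True assignments to a fixed point.
    """
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for pos, neg in clauses:
            if neg <= true_vars:           # all negative literals falsified
                if not pos:                # purely negative clause violated
                    return None            # unsatisfiable
                (p,) = pos
                if p not in true_vars:     # unit propagation forces p = True
                    true_vars.add(p)
                    changed = True
    return true_vars                       # the minimal model
```

    The returned set is the unique minimal model when one exists; the parallel question studied in the paper is essentially how many rounds of this propagation are needed, i.e. the depth of the implication chains.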

    A concept for fully automated segmentation of bone in ultrasound imaging

    This study proposes a novel concept for the automated, computerised segmentation of ultrasound images of bone based on motion information. Force is applied to the heel region with the ultrasound probe and then removed while the bone is recorded on ultrasound video. The interface between the bone and the surrounding tissues is the region that moves with maximum speed, and this observation is used to build a map of movement in which speed is the criterion for segmenting bone from the surrounding tissues. To achieve this, the image is subdivided into regions of uniform size, and each region is tracked across successive video frames using an optical flow algorithm. The average movement speed is calculated for each region, and the regions with higher speeds are identified as bone surfaces. This result is supplied as the initial contour to the Chan–Vese algorithm to obtain smoother bone surfaces, and the Chan–Vese output is then post-processed with a boundary tracing algorithm to produce the final automated bone segmentation. The segmented outcomes are compared against images manually segmented by experts, with the Bhattacharyya distance used to quantify agreement between algorithmic and manual output. The quantitative results indicated an excellent overlap between the algorithmic and manual segmentations (average ± STDEV Bhattacharyya distance: 0.06285 ± 0.002051). The bone-segmented output from the optical flow algorithm is also compared with the outputs of a model-based and a texture-based segmentation method; the motion-estimation approach achieved better segmentation accuracy than both. To our knowledge, this is the first attempt to segment the heel bone from ultrasound images using motion information.
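    The per-region speed ranking described above can be sketched as follows. The displacement field is assumed to come from an optical-flow algorithm (the study's flow computation is not reimplemented here), and the function name, block size, and half-of-maximum threshold are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

def region_speed_mask(flow, block=16, frac=0.5):
    """Rank square image regions by mean motion speed and keep the fastest.

    `flow` is an (H, W, 2) per-pixel displacement field.  Each
    (block x block) region gets the mean speed of its pixels; regions
    above `frac` of the maximum regional speed are flagged as candidate
    bone-surface regions.  (With zero motion everywhere, all regions
    trivially pass; a real pipeline would guard against that.)
    """
    h, w = flow.shape[:2]
    speed = np.linalg.norm(flow, axis=2)        # per-pixel speed magnitude
    bh, bw = h // block, w // block
    # average speed over each (block x block) region
    regional = (speed[:bh * block, :bw * block]
                .reshape(bh, block, bw, block)
                .mean(axis=(1, 3)))
    thresh = frac * regional.max()
    return regional >= thresh                   # boolean mask over regions
```

    In the study, the pixels of the selected regions would then seed the initial contour for Chan–Vese refinement.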

    A concept for movement-based computerized segmentation of connective tissue in ultrasound imaging

    This study proposes a novel concept for the computerized segmentation of ultrasound images of connective tissue based on movement. Tendons and ligaments are capable of almost frictionless movement relative to their neighbouring tissues, making them good candidates for movement-based segmentation. To demonstrate this concept, a central cross-section of the patellar tendon was imaged in the axial plane while movement was generated by manually pulling and pushing the skin close to the imaging area. Maps of internal movement were created for four representative pairs of consecutive images using normalised cross-correlation. Thresholding followed by a series of morphological operations (k-clustering, blob extraction, curve fitting) enabled the extraction of the most superficial tendon boundary. Comparison against manually segmented outputs indicated good agreement with ground truth (average ± STDEV Bhattacharyya distance: 0.170 ± 0.039). In contrast to the more superficial parts of the tissue, the applied method for motion generation did not produce clearly visible movement in the tissue areas deeper in the imaging window; segmentation of the entire tendon will require movement patterns that involve the whole tendon equally (e.g., generated by a contraction of the in-series muscle). The results of this study demonstrate for the first time that movement mapping can be used for the segmentation of connective tissue. Further research will be needed to identify the optimal way to use motion to complement existing segmentation approaches based on signal intensity, texture, and shape features.
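    The normalised cross-correlation movement mapping can be illustrated with a minimal block-matching sketch: for a patch in the first frame, search a small neighbourhood in the second frame for the displacement with the highest NCC score. Function names, patch size, and search radius are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def block_displacement(prev, curr, y, x, size=16, search=4):
    """Integer displacement of the patch at (y, x) in `prev` that best
    matches `curr` within +/- `search` pixels, by NCC score."""
    patch = prev[y:y + size, x:x + size]
    best, best_d = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > curr.shape[0] or xx + size > curr.shape[1]:
                continue                       # candidate window out of bounds
            score = ncc(patch, curr[yy:yy + size, xx:xx + size])
            if score > best:
                best, best_d = score, (dy, dx)
    return best_d
```

    Repeating this for every patch yields the movement map; thresholding its magnitude then separates the freely gliding tendon from the surrounding tissue.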

    The Database for Aggregate Analysis of ClinicalTrials.gov (AACT) and Subsequent Regrouping by Clinical Specialty

    BACKGROUND: The ClinicalTrials.gov registry provides information regarding characteristics of past, current, and planned clinical studies to patients, clinicians, and researchers; in addition, registry data are available for bulk download. However, issues related to data structure, nomenclature, and changes in data collection over time present challenges to the aggregate analysis and interpretation of these data in general and to the analysis of trials according to clinical specialty in particular. Improving usability of these data could enhance the utility of ClinicalTrials.gov as a research resource. METHODS/PRINCIPAL RESULTS: The purpose of our project was twofold. First, we sought to extend the usability of ClinicalTrials.gov for research purposes by developing a database for aggregate analysis of ClinicalTrials.gov (AACT) that contains data from the 96,346 clinical trials registered as of September 27, 2010. Second, we developed and validated a methodology for annotating studies by clinical specialty, using a custom taxonomy employing Medical Subject Heading (MeSH) terms applied by an NLM algorithm, as well as MeSH terms and other disease condition terms provided by study sponsors. Clinical specialists reviewed and annotated MeSH and non-MeSH disease condition terms, and an algorithm was created to classify studies into clinical specialties based on both MeSH and non-MeSH annotations. False positives and false negatives were evaluated by comparing algorithmic classification with manual classification for three specialties. CONCLUSIONS/SIGNIFICANCE: The resulting AACT database features study design attributes parsed into discrete fields, integrated metadata, and an integrated MeSH thesaurus, and is available for download as Oracle extracts (.dmp file and text format). This publicly-accessible dataset will facilitate analysis of studies and permit detailed characterization and analysis of the U.S. clinical trials enterprise as a whole. 
In addition, the methodology we present for creating specialty datasets may facilitate other efforts to analyze studies by specialty group.
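    The specialty-annotation step can be pictured as a term-to-specialty lookup over each study's MeSH and condition terms. The sketch below is a deliberately minimal illustration; the specialty names and term lists are invented for the example and are not the AACT taxonomy itself.

```python
# Hypothetical curated mapping from condition terms to clinical
# specialties (the real AACT taxonomy is far larger and expert-reviewed).
SPECIALTY_TERMS = {
    "cardiology": {"myocardial infarction", "heart failure", "arrhythmia"},
    "oncology": {"neoplasms", "breast neoplasms", "lymphoma"},
    "psychiatry": {"depressive disorder", "schizophrenia"},
}

def classify_study(condition_terms):
    """Return the set of specialties whose vocabulary overlaps the
    study's condition terms (case-insensitive exact matching)."""
    terms = {t.lower() for t in condition_terms}
    return {spec for spec, vocab in SPECIALTY_TERMS.items() if terms & vocab}
```

    A study can match several specialties at once, which mirrors the paper's observation that trials frequently span clinical domains; the false-positive/false-negative evaluation then compares such algorithmic labels with manual ones.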

    Biochar as a strategy to manage stem rot disease of groundnut incited by Sclerotium rolfsii

    Soil-borne diseases are often difficult to control because the pathogen can survive in the soil for long durations. This study investigates the multifaceted impacts of biochar on the management of stem rot disease in groundnut and its influence on soil properties and microbial communities. The effects of biochar at different concentrations (0%, 1%, 3%, and 5%) on groundnut stem rot disease incited by Sclerotium rolfsii were evaluated thoroughly. Under laboratory conditions, biochar exhibited no direct inhibitory effects on S. rolfsii at any concentration but indirectly suppressed sclerotial body production, suggesting a concentration-dependent influence on the pathogen's resting structures. Biochar treatments also effectively delayed symptom onset and reduced disease progression in groundnut plants, with significant variation among genotypes and biochar concentrations. Notably, interactions involving genotypes ICGV 171002 and ICGV 181035 with the BC2 + Sr (3% biochar + S. rolfsii) and BC3 + Sr (5% biochar + S. rolfsii) treatments showed superior efficacy in disease reduction under controlled conditions. Field evaluations confirmed these findings, highlighting genotype-specific responses to biochar treatments, although no significant difference was observed between the BC2 + Sr (3%) and BC3 + Sr (5%) treatments in managing stem rot disease relative to controls. Biochar application significantly increased soil nutrient levels, including nitrogen, phosphorus, and potassium, as well as soil organic matter content, electrical conductivity (EC), and pH, emphasizing its potential to improve soil fertility. Overall, these findings highlight the potential benefits of biochar for sustainable agriculture through disease management, soil nutrient enrichment, and microbial modulation, warranting further investigation into optimal application strategies across different agricultural contexts.

    AI recognition of patient race in medical imaging: a modelling study

    Background Previous studies in medical imaging have shown disparate abilities of artificial intelligence (AI) to detect a person's race, yet there is no known correlation for race on medical imaging that would be obvious to human experts when interpreting the images. We aimed to conduct a comprehensive evaluation of the ability of AI to recognise a patient's racial identity from medical images. Methods Using private (Emory CXR, Emory Chest CT, Emory Cervical Spine, and Emory Mammogram) and public (MIMIC-CXR, CheXpert, National Lung Cancer Screening Trial, RSNA Pulmonary Embolism CT, and Digital Hand Atlas) datasets, we evaluated, first, performance quantification of deep learning models in detecting race from medical images, including the ability of these models to generalise to external environments and across multiple imaging modalities. Second, we assessed possible confounding of anatomic and phenotypic population features by assessing the ability of these hypothesised confounders to detect race in isolation using regression models, and by re-evaluating the deep learning models by testing them on datasets stratified by these hypothesised confounding variables. Last, by exploring the effect of image corruptions on model performance, we investigated the underlying mechanism by which AI models can recognise race. Findings In our study, we show that standard AI deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities, which was sustained under external validation conditions (x-ray imaging [area under the receiver operating characteristics curve (AUC) range 0·91-0·99], CT chest imaging [0·87-0·96], and mammography [0·81]). We also showed that this detection is not due to proxies or imaging-related surrogate covariates for race (eg, performance of possible confounders: body-mass index [AUC 0·55], disease distribution [0·61], and breast density [0·61]). 
Finally, we provide evidence that the ability of the AI deep learning models persisted across all anatomical regions and frequency spectra of the images, suggesting that efforts to control this behaviour, when it is undesirable, will be challenging and demand further study. Interpretation The results from our study emphasise that the ability of AI deep learning models to predict self-reported race is not itself the issue of importance. However, our finding that AI can accurately predict self-reported race, even from corrupted, cropped, and noised medical images, often when clinical experts cannot, creates an enormous risk for all model deployments in medical imaging. Funding National Institute of Biomedical Imaging and Bioengineering, MIDRC grant of National Institutes of Health, US National Science Foundation, National Library of Medicine of the National Institutes of Health, and Taiwan Ministry of Science and Technology.
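    One family of corruptions the study uses, removing entire frequency bands, can be sketched with a simple FFT mask. This is a generic low-/high-pass filter under my own assumptions about the band shape (a circular cutoff in FFT-bin units), not the paper's exact corruption pipeline.

```python
import numpy as np

def band_filter(img, radius, keep="low"):
    """Zero out spatial frequencies inside or outside `radius`.

    `img` is a 2-D greyscale array.  With keep="low" only frequencies
    within `radius` bins of DC survive (blurring); with keep="high"
    only those outside survive (edges/texture).  Testing a model on
    such filtered images probes which frequency bands it relies on.
    """
    f = np.fft.fftshift(np.fft.fft2(img))        # DC moved to the centre
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h // 2, xx - w // 2)    # distance from DC, in bins
    mask = dist <= radius if keep == "low" else dist > radius
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```

    If a model's race-prediction performance survives both aggressive low-pass and high-pass versions of its inputs, as reported above, the signal cannot be isolated to any one frequency band.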

    Reading Race: AI Recognises Patient's Racial Identity In Medical Images

    Background: In medical imaging, prior studies have demonstrated disparate AI performance by race, yet there is no known correlation for race on medical imaging that would be obvious to the human expert interpreting the images. Methods: Using private and public datasets, we evaluate: A) performance quantification of deep learning models to detect race from medical images, including the ability of these models to generalize to external environments and across multiple imaging modalities; B) assessment of possible confounding anatomic and phenotype population features, such as disease distribution and body habitus, as predictors of race; and C) investigation into the underlying mechanism by which AI models can recognize race. Findings: Standard deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities. Our findings hold under external validation conditions, as well as when models are optimized to perform clinically motivated tasks. We demonstrate that this detection is not due to trivial proxies or imaging-related surrogate covariates for race, such as underlying disease distribution. Finally, we show that performance persists over all anatomical regions and frequency spectra of the images, suggesting that mitigation efforts will be challenging and demand further study. Interpretation: We emphasize that a model's ability to predict self-reported race is itself not the issue of importance. However, our finding that AI can trivially predict self-reported race -- even from corrupted, cropped, and noised medical images -- in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.

    Universalizing Complete Access to Finance: Key Conceptual Issues

    In this paper, we present two stylized models of the financial system. We make the case that, in order to realize the potential of a well-functioning complete financial market, financial system designers and financial service providers will need to think about ways to deliver financial propositions that are customized to individual households by responding to their unique circumstances. This will entail the presence of proximate, well-trained providers that intermediate between the customer and large product manufacturers whose goal is financial well-being and not merely product sales. These providers would need to use expertise in financial advice or wealth management to develop integrated financial propositions for clients. We also highlight some of the important debates that arise in making this stylized financial system a reality.