289 research outputs found

    Retrieval of Images with Objects of Specific Size, Location, and Spatial Configuration

    An approach to image retrieval based on spatial configurations is presented. The goal is to search the database for images that contain similar objects (image patches) with a given configuration, size, and position. The proposed approach consists of two parts: localized object representations that are robust to segmentation variations, and a sub-graph matching method that compares the query with the database items. The localized representations are created using a community detection method that groups visually similar segments. Extensive experimental results on three challenging datasets demonstrate the feasibility of the approach.
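
    The abstract does not give implementation details for the grouping step, so the following is only a minimal sketch of the idea of building localized representations by community detection over visually similar segments: it connects each segment descriptor to its nearest neighbours in a similarity graph and applies greedy modularity community detection from networkx as a stand-in for the paper's (unspecified) method. The function name and parameters are assumptions, not the authors' code.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def group_segments(segment_features, k=5):
    """Group visually similar segments into localized object representations.

    segment_features: (n_segments, n_features) array of per-segment descriptors
    (e.g. colour/texture statistics). Returns one set of segment indices per
    detected community.
    """
    n = len(segment_features)
    feats = segment_features / (np.linalg.norm(segment_features, axis=1, keepdims=True) + 1e-9)
    sims = feats @ feats.T  # cosine similarity between all segment pairs

    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        # Link each segment to its k most similar segments (index 0 is the segment itself).
        for j in np.argsort(-sims[i])[1:k + 1]:
            g.add_edge(i, int(j), weight=float(sims[i, j]))

    # Communities of mutually similar segments play the role of localized object representations.
    return [set(c) for c in greedy_modularity_communities(g, weight="weight")]
```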

    Image-based Search and Retrieval for Biface Artefacts using Features Capturing Archaeologically Significant Characteristics

    Archaeologists are currently producing huge numbers of digitized photographs to record and preserve artefact finds. These images are used to identify and categorize artefacts, to reason about connections between artefacts, and to perform outreach to the public. However, finding specific types of images within collections remains a major challenge, because the metadata associated with images is often sparse or inconsistent. This makes keyword-based exploratory search difficult, leaving researchers to rely on serendipity and slowing down the research process. We present an image-based retrieval system that addresses this problem for biface artefacts. To identify the artefact characteristics that need to be captured by image features, we conducted a contextual inquiry study with biface experts. We then devised several descriptors for matching images of bifaces with similar artefacts. We evaluated the performance of these descriptors using measures that specifically examine the differences between the sets of images returned by the search system under different descriptors. Through this nuanced approach, we provide a comprehensive analysis of the strengths and weaknesses of the different descriptors and identify implications for the design of search systems for archaeology.
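
    The descriptors used in the paper were derived from the contextual inquiry and are not specified in this abstract. Purely as an illustration of the overall pattern, the sketch below uses log-scaled Hu moments (OpenCV) as a placeholder shape descriptor, ranks a collection by descriptor distance to a query, and compares the result sets of two descriptors by their overlap; every name and parameter here is an assumption.

```python
import cv2
import numpy as np

def hu_descriptor(gray_image):
    """Placeholder shape descriptor: log-scaled Hu moments of an Otsu-thresholded
    8-bit grayscale artefact photograph."""
    _, mask = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def top_k(query, collection, descriptor, k=10):
    """Return the indices of the k collection images closest to the query under the descriptor."""
    q = descriptor(query)
    dists = [np.linalg.norm(descriptor(img) - q) for img in collection]
    return set(np.argsort(dists)[:k])

def result_set_overlap(set_a, set_b):
    """Jaccard overlap between the result sets returned by two different descriptors."""
    return len(set_a & set_b) / len(set_a | set_b)
```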

    Automated Segmentation and Connectivity Analysis for Normal Pressure Hydrocephalus

    Objective and Impact Statement. We propose an automated method for predicting Normal Pressure Hydrocephalus (NPH) from CT scans. A deep convolutional network segments regions of interest from the scans; these regions are then combined with MRI-derived connectivity information to predict NPH. To our knowledge, this is the first method that automatically predicts NPH from CT scans and incorporates diffusion tractography information in the prediction. Introduction. Because of their low cost and high versatility, CT scans are often used in NPH diagnosis, yet no well-defined and effective protocol currently exists for analyzing CT scans for NPH. Evans' index, an approximation of the ventricle-to-brain volume ratio computed from a single 2D image slice, has been proposed but is not robust. The proposed approach offers an effective way to quantify regions of interest and a computational method for predicting NPH. Methods. We propose a novel method to predict NPH by combining regions of interest segmented from CT scans with connectome data to compute features that capture the impact of enlarged ventricles by excluding fiber tracts passing through these regions. The segmentation and network features are used to train a model for NPH prediction. Results. Our method outperforms the current state of the art by 9 precision points and 29 recall points. Our segmentation model outperforms the current state of the art in segmenting the ventricle, gray-white matter, and subarachnoid space in CT scans. Conclusion. Our experimental results demonstrate that fast and accurate volumetric segmentation of CT brain scans can help improve the NPH diagnosis process, and that network properties can increase NPH prediction accuracy.
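
    The abstract says the network features exclude fiber tracts passing through enlarged ventricles but does not give the exact formulation. A minimal sketch of one such feature is shown below, under the assumption that atlas tractography streamlines are available as voxel coordinates registered to the same space as the CT segmentation; the function name and the fraction-of-excluded-tracts definition are illustrative assumptions.

```python
import numpy as np

def excluded_tract_fraction(ventricle_mask, streamlines):
    """Hypothetical connectivity feature: the fraction of atlas fiber tracts that
    pass through voxels segmented as ventricle in the CT scan.

    ventricle_mask: boolean 3D array produced by the segmentation model.
    streamlines: list of (n_points, 3) integer voxel-coordinate arrays.
    """
    excluded = 0
    for sl in streamlines:
        # Clamp coordinates to the volume and test whether any point falls inside the mask.
        idx = np.clip(sl, 0, np.array(ventricle_mask.shape) - 1).astype(int)
        if ventricle_mask[idx[:, 0], idx[:, 1], idx[:, 2]].any():
            excluded += 1
    return excluded / max(len(streamlines), 1)
```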

    Beyond Spatial Auto-Regressive Models: Predicting Housing Prices with Satellite Imagery

    When modeling geo-spatial data, it is critical to capture spatial correlations in order to achieve high accuracy. Spatial Auto-Regression (SAR) is a common tool for modeling such data, in which a spatial contiguity matrix (W) encodes the spatial correlations. However, the efficacy of SAR is limited by two factors. First, it depends on the choice of contiguity matrix, which is typically not learnt from data but is assumed to be known a priori. Second, it assumes that the observations can be explained by linear models. In this paper, we propose a Convolutional Neural Network (CNN) framework to model geo-spatial data (specifically housing prices) and learn the spatial correlations automatically. We show that neighborhood information embedded in satellite imagery can be leveraged to achieve the desired spatial smoothing. An additional upside of our framework is the relaxation of the linearity assumption on the data. Specific challenges we tackle while implementing our framework include: (i) how much of the neighborhood is relevant when estimating housing prices; (ii) what is the right approach to capture multiple resolutions of satellite imagery; and (iii) what other data sources can help improve the estimation of spatial correlations. We demonstrate a marked improvement of 57% over the SAR baseline through the use of features from deep neural networks for the cities of London, Birmingham, and Liverpool.
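
    The paper's network architecture is not described in this abstract. As a rough illustration of challenge (ii), the multi-resolution question, the sketch below feeds a close-up satellite patch and a wider view of the same neighbourhood through two small CNN branches (PyTorch) before regressing a price; the layer sizes and patch dimensions are arbitrary assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class MultiResPriceCNN(nn.Module):
    """Toy two-branch CNN: one branch sees a close-up satellite patch, the other a
    wider (downsampled) view of the neighbourhood; both feed a price regressor."""

    def __init__(self):
        super().__init__()

        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.close, self.wide = branch(), branch()
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, patch_close, patch_wide):
        # Concatenate features from the two spatial resolutions before regression.
        z = torch.cat([self.close(patch_close), self.wide(patch_wide)], dim=1)
        return self.head(z)

# Example: a batch of 8 properties, each with a 64x64 close-up and a 64x64 wide-area patch.
model = MultiResPriceCNN()
predicted_prices = model(torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64))
```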

    INACT—INDECT Advanced Image Cataloguing Tool


    Utility of multispectral imaging for nuclear classification of routine clinical histopathology imagery

    Background. We present an analysis of the utility of multispectral versus standard RGB imagery for routine H&E stained histopathology images, in particular for pixel-level classification of nuclei. Our multispectral imagery has 29 spectral bands, spaced 10 nm apart within the visual range of 420–700 nm. It has been hypothesized that the additional spectral bands contain further information useful for classification compared to the 3 standard bands of RGB imagery; we present analyses of our data designed to test this hypothesis. Results. For classification using all available image bands, we find the best performance (equal tradeoff between detection rate and false alarm rate) is obtained from either the multispectral or our "ccd" RGB imagery, with an overall increase in performance of 0.79% compared to the next best performing image type. For classification using single image bands, the single best multispectral band (in the red portion of the spectrum) gave a performance increase of 0.57% compared to the single best RGB band (red). Additionally, red bands had the highest coefficients/preference in our classifiers. Principal components analysis of the multispectral imagery indicates only two significant image bands, which is not surprising given the presence of two stains. Conclusion. Our results indicate that, for a pixel-level nuclear classification task, multispectral imagery of routine H&E stained histopathology provides minimal additional spectral information beyond that of standard RGB imagery.
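
    The abstract does not name the pixel-level classifier used. As a rough sketch of how such a band comparison can be run, the function below trains a linear discriminant classifier (scikit-learn) on per-pixel spectra and can be applied to either the 29-band multispectral stack or a 3-band RGB image of the same field; the function name and the 50/50 train/test split are assumptions made for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

def pixel_classification_score(image_bands, nuclei_mask):
    """Train a per-pixel nucleus/background classifier and report its accuracy.

    image_bands: (H, W, B) array, with B = 3 for RGB or B = 29 for the multispectral stack.
    nuclei_mask: (H, W) boolean ground-truth mask of nuclear pixels.
    """
    X = image_bands.reshape(-1, image_bands.shape[-1]).astype(float)
    y = nuclei_mask.reshape(-1).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
    clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Comparing image types then amounts to running this on the multispectral and RGB
# stacks of the same field of view and comparing the resulting scores.
```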