Learning Marginalization through Regression for Hand Orientation Inference
We present a novel marginalization method for multi-layered Random Forest-based hand orientation regression. The proposed model is composed of two layers, where the first layer consists of a marginalization weights regressor, while the second layer contains expert regressors trained on subsets of our hand orientation dataset. We use a latent variable space to divide our dataset into subsets. Each expert regressor gives a posterior probability for assigning a given latent variable to the input data. Our main contribution comes from the regression-based marginalization of these posterior probabilities. We use a Kullback-Leibler divergence-based optimization for estimating the weights that are used to train our marginalization weights regressor. In comparison to the state-of-the-art of both hand orientation inference and multi-layered Random Forest marginalization, our proposed method proves to be more robust.
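The abstract does not spell out the optimization, so the following is only a minimal sketch of the general idea: estimating simplex weights that combine expert posteriors into a mixture close to a reference distribution by minimizing a Kullback-Leibler divergence; such weights could then serve as training targets for a marginalization weights regressor. The reference distribution p, the use of scipy.optimize, and all names are assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_marginalization_weights(p, Q):
    """p: (d,) reference distribution; Q: (K, d) expert posteriors, rows summing to 1."""
    K = Q.shape[0]

    def kl_objective(w):
        mix = np.clip(w @ Q, 1e-12, None)          # mixture of expert posteriors
        return np.sum(p * (np.log(np.clip(p, 1e-12, None)) - np.log(mix)))

    constraints = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)  # weights sum to 1
    bounds = [(0.0, 1.0)] * K                                          # non-negative weights
    res = minimize(kl_objective, np.full(K, 1.0 / K),
                   bounds=bounds, constraints=constraints)
    return res.x

# Toy example: three experts, four latent bins.
Q = np.array([[0.70, 0.20, 0.05, 0.05],
              [0.10, 0.60, 0.20, 0.10],
              [0.05, 0.10, 0.25, 0.60]])
p = np.array([0.30, 0.40, 0.15, 0.15])
print(fit_marginalization_weights(p, Q))
```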
Staged Probabilistic Regression for Hand Orientation Inference
Learning the global hand orientation from 2D monocular images is a challenging task, as the projected hand shape is affected by a number of variations. These include inter-person hand shape and size variations, intra-person pose and style variations, and self-occlusion due to varying hand orientation. Given a hand orientation dataset containing these variations, a single regressor proves to be limited for learning the mapping of hand silhouette images onto the orientation angles. We address this by proposing a staged probabilistic regressor (SPORE), which consists of multiple expert regressors, each one learning a subset of variations from the dataset. Inspired by Boosting, the novelty of our method comes from the staged probabilistic learning, where each stage consists of training and adding an expert regressor to the intermediate ensemble of expert regressors. Unlike Boosting, we marginalize the posterior prediction probabilities from each expert regressor by learning a marginalization weights regressor, where the weights are extracted during training using a Kullback-Leibler divergence-based optimization. We extend and evaluate our proposed framework for inferring hand orientation and pose simultaneously. In comparison to the state-of-the-art of hand orientation inference, multi-layered Random Forest marginalization and Boosting, our proposed method proves to be more accurate. Moreover, experimental results reveal that simultaneously learning hand orientation and pose from 2D monocular images significantly improves the pose classification performance.
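As a rough illustration of the staged construction described above (not SPORE itself), the sketch below adds one expert regressor per data subset, stage by stage, and marginalizes predictions with a weight vector. The subset split, the choice of RandomForestRegressor, and the uniform fallback weights are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class StagedEnsemble:
    """Hypothetical staged ensemble: one expert per training subset."""
    def __init__(self):
        self.experts = []

    def add_stage(self, X_subset, y_subset):
        # Each stage trains a new expert on its own subset of the data.
        expert = RandomForestRegressor(n_estimators=50).fit(X_subset, y_subset)
        self.experts.append(expert)

    def predict(self, X, weights=None):
        preds = np.stack([e.predict(X) for e in self.experts])   # (K, n, ...)
        if weights is None:                                       # uniform fallback
            weights = np.full(len(self.experts), 1.0 / len(self.experts))
        return np.tensordot(weights, preds, axes=1)               # weighted marginal

# Usage: partition the training data (e.g. by a latent variable), call add_stage
# once per subset, then predict on held-out silhouette features.
```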
Probabilistic Spatial Regression using a Deep Fully Convolutional Neural Network
Probabilistic predictions are often preferred in computer vision problems because they provide a confidence for the predicted value. The recent dominant model for computer vision problems, the convolutional neural network, produces probabilistic output for classification and segmentation problems, but probabilistic regression using neural networks is not well defined. In this work, we present a novel fully convolutional neural network capable of producing a spatial probability distribution for localizing image landmarks. We have introduced a new network layer and a novel loss function for the network to produce a two-dimensional probability map. The proposed network has been used in a novel framework to localize vertebral corners in lateral cervical X-ray images. The framework has been evaluated on a dataset of 172 images consisting of 797 vertebrae and 3,188 vertebral corners. The proposed framework has demonstrated promising performance in localizing vertebral corners, with a relative improvement of 38% over the previous state-of-the-art.
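The paper's actual layer and loss are not given in the abstract; the sketch below shows one common way to obtain a two-dimensional probability map from a convolutional head, using a spatial softmax and a cross-entropy-style loss against a target heatmap. The module name, the 1x1 convolution, and the point target are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialProbHead(nn.Module):
    """Hypothetical head: per-pixel scores normalized to a 2D probability map."""
    def __init__(self, in_channels):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats):
        s = self.score(feats)                      # (B, 1, H, W) scores
        b, _, h, w = s.shape
        p = F.softmax(s.view(b, -1), dim=1)        # spatial softmax over H*W
        return p.view(b, 1, h, w)                  # sums to 1 per image

def prob_map_loss(pred, target, eps=1e-12):
    # Cross-entropy between the target map and the predicted spatial distribution.
    return -(target * torch.log(pred + eps)).sum(dim=(1, 2, 3)).mean()

# Toy usage with random features and a point target at the landmark location.
feats = torch.randn(2, 32, 64, 64)
head = SpatialProbHead(32)
pred = head(feats)
target = torch.zeros(2, 1, 64, 64)
target[:, :, 30, 40] = 1.0                          # in practice, a small Gaussian blob
print(prob_map_loss(pred, target).item())
```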
Automatic Segmentation of Polyps in Colonoscopic Narrow-Band Imaging Data
Colorectal cancer is the third most common type of cancer worldwide. However, this disease can be prevented by the detection and removal of precursor adenomatous polyps during optical colonoscopy (OC). During OC, the endoscopist looks for colon polyps. While hyperplastic polyps are benign lesions, adenomatous polyps are likely to become cancerous; hence, it is common practice to remove all identified polyps and send them for subsequent histological analysis. However, removal of hyperplastic polyps poses unnecessary risk to patients and incurs unnecessary costs for histological analysis. In this paper, we develop the first part of a novel optical biopsy application based on narrow-band imaging (NBI). A barrier to an automatic system is that polyp classification algorithms require manual segmentation of the polyps, so we automatically segment polyps in colonoscopic NBI data. We propose an algorithm, Shape-UCM, which is an extension of gPb-OWT-UCM, a state-of-the-art algorithm for boundary detection and segmentation. Shape-UCM solves the intrinsic scale selection problem of gPb-OWT-UCM by including prior knowledge about the shape of the polyps. Shape-UCM outperforms previous methods with a specificity of 92%, a sensitivity of 71%, and an accuracy of 88% for automatic segmentation of a test set of 87 images.
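Shape-UCM itself is not reproduced here. Purely as an illustration of shape-driven scale selection on an ultrametric contour map, the sketch below sweeps the UCM threshold and keeps the segmentation whose largest region scores best under a simple roundness measure; the threshold range, minimum area, and roundness score are stand-ins for the paper's shape prior.

```python
import numpy as np
from skimage.measure import label, regionprops

def select_scale_by_shape(ucm, thresholds=np.linspace(0.1, 0.9, 17), min_area=500):
    """ucm: ultrametric contour map in [0, 1]; returns the most polyp-like segmentation."""
    best_mask, best_score = None, -np.inf
    for t in thresholds:
        regions = label(ucm < t)                  # connected regions below contour strength t
        candidates = [p for p in regionprops(regions) if p.area >= min_area]
        if not candidates:
            continue
        r = max(candidates, key=lambda p: p.area)  # largest candidate region at this scale
        roundness = 4.0 * np.pi * r.area / (r.perimeter ** 2 + 1e-8)
        if roundness > best_score:
            best_score, best_mask = roundness, regions == r.label
    return best_mask, best_score
```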
Image-Based Photo Hulls
We present an efficient image-based rendering algorithm that computes photo hulls of a scene photographed from multiple viewpoints. Our algorithm, called image-based photo hulls (IBPH), like the image-based visual hulls (IBVH) algorithm of Matusik et al. (2000) on which it is based, takes advantage of epipolar geometry to efficiently reconstruct the geometry and visibility of a scene. Our IBPH algorithm differs from IBVH in that it utilizes the color information of the images to identify the scene geometry. These additional color constraints often result in a more accurately reconstructed geometry, which projects to better synthesized virtual views of the scene. We demonstrate our algorithm running in a real-time 3D telepresence application using video data acquired from four viewpoints.
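The color constraint at the heart of photo hulls is a photo-consistency test; the sketch below shows a generic version (not the IBPH implementation), where a candidate 3D point is kept only if its projections into the views that see it have similar colors. The projection matrices, visibility flags, and variance threshold tau are assumed inputs.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X with a 3x4 camera matrix P; returns pixel (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def photo_consistent(X, cameras, images, visible, tau=30.0):
    """True if the colors observed at X's projections agree within threshold tau."""
    colors = []
    for P, img, vis in zip(cameras, images, visible):
        if not vis:
            continue
        u, v = project(P, X)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < img.shape[0] and 0 <= ui < img.shape[1]:
            colors.append(img[vi, ui].astype(float))
    if len(colors) < 2:
        return False                     # not enough observations to test consistency
    return np.mean(np.std(np.stack(colors), axis=0)) < tau
```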
Global Localization and Orientation of the Cervical Spine in X-ray Imaging
Injuries in cervical spine X-ray images are often missed by emergency physicians, and many of these missed injuries cause further complications. Automated analysis of the images has the potential to reduce the chance of missing injuries. Towards this goal, this paper proposes an automatic localization of the spinal column in cervical spine X-ray images. The framework employs a random classification forest algorithm with a kernel density estimation-based voting accumulation method to localize the spinal column and to detect its orientation. The algorithm has been evaluated on 90 emergency room X-ray images and has achieved an average detection accuracy of 91% and an orientation error of 3.6°. The framework can be used to narrow the search area for other advanced injury detection systems.
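The forest itself is not sketched here. As an illustration of kernel density estimation-based vote accumulation, the code below takes positions voted for by spine-positive patches, fits a KDE, and returns the density mode as the column location; the vote generation, grid step, and bandwidth are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def localize_from_votes(votes, image_shape, step=4):
    """votes: (N, 2) array of (x, y) positions predicted by spine-positive patches."""
    kde = gaussian_kde(votes.T)                       # KDE over the 2D votes
    xs = np.arange(0, image_shape[1], step)
    ys = np.arange(0, image_shape[0], step)
    gx, gy = np.meshgrid(xs, ys)
    density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gy.shape)
    iy, ix = np.unravel_index(np.argmax(density), density.shape)
    return xs[ix], ys[iy]                             # estimated column location (x, y)

# Toy example: noisy votes clustered around (120, 250) in a 512x512 image.
rng = np.random.default_rng(0)
votes = rng.normal(loc=(120, 250), scale=15, size=(200, 2))
print(localize_from_votes(votes, (512, 512)))
```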
Ultrasound-Specific Segmentation via Decorrelation and Statistical Region-Based Active Contours
Segmentation of ultrasound images is often a very challenging task due to the speckle noise that contaminates the image. It is well known that speckle noise exhibits an asymmetric distribution as well as significant spatial correlation. Since these attributes can be difficult to model, many previous ultrasound segmentation methods oversimplify the problem by assuming that the noise is white and/or Gaussian, resulting in generic approaches that are actually more suitable to MR and X-ray segmentation than to ultrasound. Unlike these methods, in this paper we present an ultrasound-specific segmentation approach that first decorrelates the image and then performs segmentation on the whitened result using statistical region-based active contours. In particular, we design a gradient ascent flow that evolves the active contours to maximize a log-likelihood functional based on the Fisher-Tippett distribution. We present experimental results that demonstrate the effectiveness of our method.
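The contour evolution itself is not reproduced here. The sketch below only illustrates a region-based log-likelihood of the kind the abstract describes, using the Fisher-Tippett form that arises when Rayleigh-distributed amplitude is log-compressed, p(x) = (e^{2x}/s) exp(-e^{2x}/(2s)) with s = sigma^2 fitted by maximum likelihood per region; whether this matches the paper's exact parameterization is an assumption, and a contour would be evolved to increase this functional.

```python
import numpy as np

def ft_loglik(x):
    """Fisher-Tippett log-likelihood of log-intensity samples x at the ML parameter."""
    s = np.mean(np.exp(2.0 * x)) / 2.0          # ML estimate of sigma^2
    return np.sum(2.0 * x - np.log(s) - np.exp(2.0 * x) / (2.0 * s))

def region_energy(log_image, mask):
    # Higher is better: the contour (here a binary mask) should split the whitened
    # image into two regions that are each well fit by their own parameter.
    return ft_loglik(log_image[mask]) + ft_loglik(log_image[~mask])
```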
Supervised Partial Volume Effect Unmixing for Brain Tumor Characterization using Multi-voxel MR Spectroscopic Imaging
A major challenge faced by multi-voxel Magnetic Resonance Spectroscopy (MV-MRS) imaging is the partial volume effect (PVE), where signals from two or more tissue types may be mixed within a voxel. This problem arises from the low-resolution data acquisition, where the size of a voxel is kept relatively large to improve the signal-to-noise ratio. We propose a novel supervised Signal Mixture Model (SMM), which characterizes the MV-MRS signal into normal, low-grade (infiltrative) and high-grade (necrotic) brain tissue types, while accounting for within-type variation. An optimization problem based on differential equations is solved to unmix the tissue by estimating mixture coefficients corresponding to each tissue type at each voxel. This enables the visualization of probability heatmaps, useful for characterizing heterogeneous tumors. Experimental results show overall accuracies of 91.67% and 88.89% for classifying tumors into either low or high grade against histopathology, and demonstrate the method's potential for non-invasive computer-aided diagnosis.
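The authors' SMM and its differential-equation-based optimization are not given in the abstract. Purely as a generic stand-in for per-voxel unmixing, the sketch below estimates simplex-constrained mixture coefficients over three representative tissue spectra by constrained least squares; the spectra matrix S and the least-squares criterion are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def unmix_voxel(y, S):
    """y: (d,) voxel spectrum; S: (d, 3) columns = normal / low-grade / high-grade spectra."""
    k = S.shape[1]
    objective = lambda c: np.sum((S @ c - y) ** 2)                   # reconstruction error
    constraints = ({"type": "eq", "fun": lambda c: np.sum(c) - 1.0},)  # coefficients sum to 1
    bounds = [(0.0, 1.0)] * k                                          # non-negative fractions
    res = minimize(objective, np.full(k, 1.0 / k),
                   bounds=bounds, constraints=constraints)
    return res.x          # per-voxel mixture fractions, e.g. for a probability heatmap
```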
Generating a 3D Hand Model from Frontal Color and Range Scans
Realistic 3D modeling of human hand anatomy has a number of important applications, including real-time tracking, pose estimation, and human-computer interaction. However, the use of RGB-D sensors to accurately capture the full 3D shape of a hand is limited by self-occlusion, the relatively small size of the hand, and the requirement to capture multiple images. In this paper, we propose a method for generating a detailed, realistic hand model from a single frontal range scan and registered color image. In essence, our method converts this 2.5D data into a fully 3D model. The proposed approach extracts joint locations from the color image using a fingertip and inter-finger region detector with a Naive Bayes probabilistic model. Direct correspondences between these joint locations in the range scan and a synthetic hand model are used to perform rigid registration, followed by a thin-plate-spline deformation that non-rigidly registers the synthetic model. This reconstructed model maintains similar geometric properties to the range scan, but also includes the back side of the hand. Experimental results demonstrate the promise of the method to produce detailed and realistic 3D hand geometry.
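For the rigid step mentioned above, a standard closed-form choice is Kabsch/Procrustes alignment from point correspondences; the sketch below shows that generic alignment, with the joint correspondences assumed given and the subsequent thin-plate-spline warp omitted. Whether the paper uses this exact solver is not stated in the abstract.

```python
import numpy as np

def rigid_align(src, dst):
    """Find R, t minimizing sum ||R @ src_i + t - dst_i||^2 over corresponding 3D points."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)   # centre both point sets
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)             # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t
```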
Hough Forest-based Corner Detection for Cervical Spine Radiographs
The cervical spine (neck region) is highly sensitive to trauma-related injuries, which must be analysed carefully by emergency physicians. In this work, we propose a Hough Forest-based corner detection method for cervical spine radiographs as a first step towards a computer-aided diagnostic tool. We propose a novel patch-based model based on two-stage supervised learning (classification and regression) to estimate the corners of cervical vertebral bodies. Our method is evaluated using 106 cervical X-ray images consisting of 530 vertebrae and 2,120 corners, which have been demarcated manually by an expert radiographer. The results show promising performance of the proposed algorithm, with a lowest median error of 1.98 mm.
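The trained forest is not reproduced here. As a generic illustration of Hough-style vote accumulation, the sketch below lets each patch classified as lying near a corner cast a vote at its centre plus a regressed offset, then reads corner estimates off the peaks of the smoothed accumulator; the classifier/regressor outputs, smoothing sigma, and peak threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def accumulate_votes(centres, offsets, image_shape, sigma=3.0):
    """centres, offsets: (N, 2) arrays of (row, col); returns a smoothed vote map."""
    acc = np.zeros(image_shape, dtype=float)
    votes = np.round(centres + offsets).astype(int)
    for r, c in votes:
        if 0 <= r < image_shape[0] and 0 <= c < image_shape[1]:
            acc[r, c] += 1.0
    return gaussian_filter(acc, sigma)

def find_peaks(acc, min_value=0.5):
    # Local maxima of the accumulator above a threshold serve as corner estimates.
    local_max = (acc == maximum_filter(acc, size=9)) & (acc > min_value)
    return np.argwhere(local_max)          # (row, col) corner estimates
```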
