1,222 research outputs found

    Deep Eyes: Binocular Depth-from-Focus on Focal Stack Pairs

    The human visual system relies on both binocular stereo cues and monocular focusness cues to gain effective 3D perception. In computer vision, the two problems are traditionally solved in separate tracks. In this paper, we present a unified learning-based technique that simultaneously uses both types of cues for depth inference. Specifically, we use a pair of focal stacks as input to emulate human perception. We first construct a comprehensive focal stack training dataset synthesized by depth-guided light field rendering. We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching. We show how to integrate them into a unified BDfF-Net to obtain high-quality depth maps. Comprehensive experiments show that our approach outperforms the state-of-the-art in both accuracy and speed and effectively emulates the human visual system.
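
    Read as a pipeline, the integration described above might look like the following PyTorch-style sketch; the sub-module interfaces and the concatenation-based fusion are assumptions made for illustration, not the authors' implementation.

        import torch
        import torch.nn as nn

        class BDfFNet(nn.Module):
            """Schematic composition of the three sub-networks described in the
            abstract. The sub-modules are placeholders for the actual Focus-Net,
            EDoF-Net and Stereo-Net architectures, which the abstract does not detail."""
            def __init__(self, focus_net, edof_net, stereo_net, fusion_net):
                super().__init__()
                self.focus_net = focus_net    # focal stack -> depth from focus cues
                self.edof_net = edof_net      # focal stack -> all-in-focus (EDoF) image
                self.stereo_net = stereo_net  # EDoF image pair -> depth from stereo cues
                self.fusion_net = fusion_net  # merges the two depth estimates

            def forward(self, left_stack, right_stack):
                d_focus = self.focus_net(left_stack)                # monocular focusness cue
                edof_left = self.edof_net(left_stack)               # sharp left view
                edof_right = self.edof_net(right_stack)             # sharp right view
                d_stereo = self.stereo_net(edof_left, edof_right)   # binocular stereo cue
                return self.fusion_net(torch.cat([d_focus, d_stereo], dim=1))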

    Deep Bilateral Learning for Real-Time Image Enhancement

    Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher. Comment: 12 pages, 14 figures, SIGGRAPH 2017
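
    A rough NumPy sketch of the runtime path described above, with nearest-neighbor slicing standing in for the paper's edge-preserving (guided, trilinear) slicing node; the grid layout and the luminance guide are assumptions for illustration.

        import numpy as np

        def apply_bilateral_grid(full_res, grid):
            """Slice a low-resolution bilateral grid of per-cell 3x4 affine color
            transforms into a full-resolution image and apply them per pixel.
            full_res: H x W x 3 float image in [0, 1]
            grid:     Gh x Gw x D x 3 x 4 coefficients predicted from the low-res input
            Nearest-neighbor lookup is used for brevity; the actual slicing node
            interpolates trilinearly, guided by a learned single-channel guide map."""
            H, W, _ = full_res.shape
            Gh, Gw, D = grid.shape[:3]
            guide = full_res.mean(axis=2)                     # crude stand-in for the learned guide
            ys = np.arange(H) * Gh // H                       # spatial grid cell per row
            xs = np.arange(W) * Gw // W                       # spatial grid cell per column
            zs = np.minimum((guide * D).astype(int), D - 1)   # intensity (range) bin per pixel
            A = grid[ys[:, None], xs[None, :], zs]            # H x W x 3 x 4 per-pixel affine
            rgb1 = np.concatenate([full_res, np.ones((H, W, 1))], axis=2)  # homogeneous coords
            return np.einsum('hwij,hwj->hwi', A, rgb1)        # apply local affine transform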

    Fast Local Laplacian Filters: Theory and Applications

    Multi-scale manipulations are central to image editing but they are also prone to halos. Achieving artifact-free results requires sophisticated edge-aware techniques and careful parameter tuning. These shortcomings were recently addressed by the local Laplacian filters, which can achieve a broad range of effects using standard Laplacian pyramids. However, these filters are slow to evaluate and their relationship to other approaches is unclear. In this paper, we show that they are closely related to anisotropic diffusion and to bilateral filtering. Our study also leads to a variant of the bilateral filter that produces cleaner edges while retaining its speed. Building upon this result, we describe an acceleration scheme for local Laplacian filters on gray-scale images that yields speed-ups on the order of 50×. Finally, we demonstrate how to use local Laplacian filters to alter the distribution of gradients in an image. We illustrate this property with a robust algorithm for photographic style transfer.
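
    As background for the analysis above, the core of a local Laplacian filter is a pointwise remapping of the image around each Gaussian-pyramid coefficient; a minimal sketch of the standard remapping function, with generic parameter names rather than this paper's notation.

        import numpy as np

        def remap(i, g, sigma_r, alpha, beta):
            """Standard local Laplacian remapping around reference intensity g.
            |i - g| <= sigma_r : detail region, reshaped by the exponent alpha
            |i - g| >  sigma_r : edge region, scaled by beta (tone manipulation)"""
            d = i - g
            detail = g + np.sign(d) * sigma_r * (np.abs(d) / sigma_r) ** alpha
            edge = g + np.sign(d) * (beta * (np.abs(d) - sigma_r) + sigma_r)
            return np.where(np.abs(d) <= sigma_r, detail, edge)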

    Test of isospin symmetry via low energy $^1$H($\pi^-,\pi^0$)n charge exchange

    We report measurements of the $\pi^- p \to \pi^0 n$ differential cross sections at six momenta (104-143 MeV/c) and four angles (0-40 deg) by detection of $\gamma$-ray pairs from $\pi^0 \to \gamma\gamma$ decays using the TRIUMF RMC spectrometer. This region exhibits a vanishing zero-degree cross section from destructive interference between s- and p-waves, thus yielding special sensitivity to pion-nucleon dynamics and isospin symmetry breaking. Our data and previous data do not agree, with important implications for earlier claims of large isospin violating effects in low energy pion-nucleon interactions. Comment: 5 pages, 3 figures, submitted to Physical Review Letters
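
    In schematic partial-wave form, keeping only the dominant non-spin-flip s- and p-wave terms (the simplification behind the statement above), the zero-degree cross section is

        \frac{d\sigma}{d\Omega}(0^\circ) = \bigl| f_s(k) + f_p(k) \bigr|^2 ,

    so it nearly vanishes at momenta where the two amplitudes are comparable in magnitude and opposite in sign, which is why this region is especially sensitive to small isospin-breaking shifts in the pion-nucleon amplitudes.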

    Fast and Robust Pyramid-based Image Processing

    Multi-scale manipulations are central to image editing but they are also prone to halos. Achieving artifact-free results requires sophisticated edge-aware techniques and careful parameter tuning. These shortcomings were recently addressed by the local Laplacian filters, which can achieve a broad range of effects using standard Laplacian pyramids. However, these filters are slow to evaluate and their relationship to other approaches is unclear. In this paper, we show that they are closely related to anisotropic diffusion and to bilateral filtering. Our study also leads to a variant of the bilateral filter that produces cleaner edges while retaining its speed. Building upon this result, we describe an acceleration scheme for local Laplacian filters that yields speed-ups on the order of 50×. Finally, we demonstrate how to use local Laplacian filters to alter the distribution of gradients in an image. We illustrate this property with a robust algorithm for photographic style transfer.
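
    The acceleration mentioned above hinges on evaluating the remapping only at a few sampled reference intensities and interpolating; a simplified sketch of that per-coefficient interpolation, assuming the remapped Laplacian levels were precomputed elsewhere, with illustrative names that are not the paper's.

        import numpy as np

        def interpolate_laplacian_level(gauss_level, lap_samples, intensities):
            """Combine K precomputed Laplacian levels into one output level.
            gauss_level : H x W Gaussian-pyramid level of the input image
            lap_samples : K x H x W Laplacian levels, lap_samples[k] built from the
                          image remapped around reference intensity intensities[k]
            intensities : increasing array of K sampled reference intensities
            Each output coefficient is a linear interpolation between the two
            samples whose reference intensities bracket the Gaussian coefficient."""
            K = len(intensities)
            idx = np.interp(gauss_level, intensities, np.arange(K))  # fractional sample index
            lo = np.floor(idx).astype(int)
            hi = np.minimum(lo + 1, K - 1)
            w = idx - lo
            rows, cols = np.indices(gauss_level.shape)
            return (1 - w) * lap_samples[lo, rows, cols] + w * lap_samples[hi, rows, cols]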

    Search-and-replace editing for personal photo collections

    We propose a new system for editing personal photo collections, inspired by search-and-replace editing for text. In our system, local edits specified by the user in a single photo (e.g., using the “clone brush” tool) can be propagated automatically to other photos in the same collection, by matching the edited region across photos. To achieve this, we build on tools from computer vision for image matching. Our experimental results on real photo collections demonstrate the feasibility and potential benefits of our approach. Funding: Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship; Massachusetts Institute of Technology Undergraduate Research Opportunities Program; National Science Foundation (U.S.) (CAREER award 0447561); T-Party Project; United States National Geospatial-Intelligence Agency (NGA NEGI-1582-04-0004); United States Office of Naval Research Multidisciplinary University Research Initiative (Grant N00014-06-1-0734); Microsoft Research; Alfred P. Sloan Foundation.
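
    A hypothetical OpenCV sketch of the matching step described above: locate the edited region of the source photo inside another photo with local features and a homography, then warp the edit mask into that photo's coordinates. The actual system is more involved (it handles dense correspondence and blending); this only illustrates the idea of matching the edited region across photos.

        import cv2
        import numpy as np

        def propagate_edit_mask(src, src_edit_mask, dst, min_matches=10):
            """src, dst: 8-bit grayscale photos; src_edit_mask: 8-bit mask of the
            edited region in src. Returns the mask warped into dst, or None."""
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(src, src_edit_mask)   # features inside the edit only
            k2, d2 = orb.detectAndCompute(dst, None)
            if d1 is None or d2 is None:
                return None
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
            if len(matches) < min_matches:
                return None
            pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)
            if H is None:
                return None
            h, w = dst.shape[:2]
            return cv2.warpPerspective(src_edit_mask, H, (w, h))  # mask in dst coordinates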

    Diffuse reflectance imaging with astronomical applications

    Diffuse objects generally tell us little about the surrounding lighting, since the radiance they reflect blurs together incident lighting from many directions. In this paper we discuss how occlusion geometry can help invert diffuse reflectance to recover lighting or surface albedo. Self-occlusion in the scene can be regarded as a form of coding, creating high frequencies that improve the conditioning of diffuse light transport. Our analysis builds on a basic observation that diffuse reflectors with sufficiently detailed geometry can fully resolve the incident lighting. Using a Bayesian framework, we propose a novel reconstruction method based on high-resolution photography, taking advantage of visibility changes near occlusion boundaries. We also explore the limits of single-pixel observations as the diffuse reflector (and potentially the lighting) vary over time. Diffuse reflectance imaging is particularly relevant for astronomy applications, where diffuse reflectors arise naturally but the incident lighting and camera position cannot be controlled. To test our approaches, we first study the feasibility of using the moon as a diffuse reflector to observe the earth as seen from space. Next we present a reconstruction of Mars using historical photometry measurements not previously used for this purpose. As our results suggest, diffuse reflectance imaging expands our notion of what can qualify as a camera. Funding: Natural Sciences and Engineering Research Council of Canada (NSERC) Postdoctoral Fellowship; United States-Israel Binational Science Foundation (Grant 2008155); United States National Geospatial-Intelligence Agency (NEGI-1582-04-0004); United States Multidisciplinary University Research Initiative (Grant N00014-06-1-0734).
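
    In the standard notation for such problems (the symbols here are generic, not necessarily the paper's), the radiance observed at a diffuse surface point x is a visibility-weighted integral of the incident lighting,

        B(x) = \rho(x) \int_{\Omega} V(x,\omega)\, L(\omega)\, \max\bigl(0,\, n(x)\cdot\omega\bigr)\, d\omega ,

    and without the visibility term V the kernel is a smooth cosine lobe that strongly low-passes L, whereas occlusion boundaries introduce high-frequency structure into V and thereby improve the conditioning of the inverse problem for the lighting L or the albedo \rho.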

    Longitudinal muon spin relaxation in high purity aluminum and silver

    The time dependence of muon spin relaxation has been measured in high purity aluminum and silver samples in a longitudinal 2 T magnetic field at room temperature, using time-differential $\mu$SR. For times greater than 10 ns, the shape fits well to a single exponential with relaxation rates of $\lambda_{\mathrm{Al}} = 1.3 \pm 0.2\,(\mathrm{stat.}) \pm 0.3\,(\mathrm{syst.})~\mathrm{ms}^{-1}$ and $\lambda_{\mathrm{Ag}} = 1.0 \pm 0.2\,(\mathrm{stat.}) \pm 0.2\,(\mathrm{syst.})~\mathrm{ms}^{-1}$.
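
    Concretely, the single-exponential shape referred to above corresponds to fitting the muon polarization (asymmetry) signal, for t > 10 ns, to

        P(t) = P_0\, e^{-\lambda t} ,

    with \lambda the longitudinal relaxation rate quoted for each sample.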

    In-Medium Chiral Perturbation Theory and Pion Weak Decay in the Presence of Background Matter

    Two-point functions related to the pion weak decay constant $f_\pi$ are calculated from the generating functional of chiral perturbation theory in the mean-field approximation and the heavy-baryon limit. The aim is to demonstrate that Lorentz invariance is violated in the presence of background matter. This fact manifests itself in the splitting of both $f_\pi$ and the pion mass into uncorrelated time- and spacelike parts. We emphasize the different in-medium renormalizations of the correlation functions, show the inequivalence between the in-medium values of $f_\pi$ deduced from Walecka-type models, on the one hand, and QCD sum rules, on the other hand, and elaborate on the importance for some nuclear physics observables. Comment: 14 pages, RevTeX, no figures, to appear in Nucl. Phys.
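
    The splitting referred to above is conventionally parametrized, in the rest frame of the medium, by separate temporal and spatial decay constants (the notation here is the generic one, not necessarily the paper's),

        \langle 0 |\, A_0^a(0)\, | \pi^b(q) \rangle = i\,\delta^{ab} f_\pi^{t}\, q_0 , \qquad
        \langle 0 |\, A_i^a(0)\, | \pi^b(q) \rangle = i\,\delta^{ab} f_\pi^{s}\, q_i ,

    which coincide, f_\pi^{t} = f_\pi^{s} = f_\pi, only in the Lorentz-invariant vacuum; an analogous splitting occurs for the time- and spacelike parts of the in-medium pion mass.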