
    Magnetization Profile in the d=2 Semi-Infinite Ising Model and Crossover between Ordinary and Normal Transition

    We theoretically investigate the spatial dependence of the order parameter of the two-dimensional semi-infinite Ising model with a free surface at or above the bulk critical temperature. Special attention is paid to the influence of a surface magnetic field h_1 and the crossover between the fixed points at h_1 = 0 and h_1 = infinity. The sharp increase of the magnetization m(z) close to the boundary generated by a small h_1, which was found previously by the present authors in the three-dimensional model, is also seen in two dimensions. There, however, the universal short-distance power law is modified by a logarithm. By means of a phenomenological scaling analysis, the short-distance behavior can be related to the logarithmic dependence of the surface magnetization on h_1. Our results, which are corroborated by Monte Carlo simulations, provide a deeper understanding of the existing exact results concerning the local magnetization and relate the short-distance phenomena in two dimensions to those in higher dimensions.
    Comment: 20 pages, 9 figures, submitted to Phys. Rev.
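    The crossover between the h_1 = 0 (ordinary) and h_1 = infinity (normal) fixed points can be organized by a standard surface-critical scaling ansatz. The following is a sketch in conventional notation (bulk exponents beta and nu, surface gap exponent Delta_1), written from general scaling theory rather than taken from the paper:

```latex
m(z, h_1) \simeq z^{-\beta/\nu}\, g\!\left(h_1\, z^{\Delta_1/\nu}\right),
\qquad
g(x) \sim
\begin{cases}
x, & x \ll 1 \quad \text{(ordinary regime: } m \propto h_1\, z^{(\Delta_1-\beta)/\nu}\text{)},\\
\mathrm{const}, & x \gg 1 \quad \text{(normal regime: } m \sim z^{-\beta/\nu}\text{)}.
\end{cases}
```

    In d = 2, the abstract states that this short-distance power law acquires a logarithmic modification, mirroring the logarithmic dependence of the surface magnetization on h_1.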

    Total Denoising: Unsupervised Learning of 3D Point Cloud Cleaning

    We show that denoising of 3D point clouds can be learned unsupervised, directly from noisy 3D point cloud data alone. This is achieved by extending recent ideas from learning of unsupervised image denoisers to unstructured 3D point clouds. Unsupervised image denoisers operate under the assumption that a noisy pixel observation is a random realization of a distribution around a clean pixel value, which allows appropriate learning on this distribution to eventually converge to the correct value. Regrettably, this assumption is not valid for unstructured points: 3D point clouds are subject to total noise, i.e., deviations in all coordinates, with no reliable pixel grid. Thus, an observation can be the realization of an entire manifold of clean 3D points, which makes a na\"ive extension of unsupervised image denoisers to 3D point clouds impractical. Overcoming this, we introduce a spatial prior term that steers convergence to the unique closest mode among the many possible modes on the manifold. Our results demonstrate unsupervised denoising performance similar to that of supervised learning with clean data when given enough training examples, while requiring no pairs of noisy and clean training data.
    Comment: Proceedings of ICCV 201
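    The effect of such a spatial prior can be illustrated with a toy computation: rather than averaging over every candidate clean point (which would blur distinct parts of the surface together), each candidate is weighted by its proximity to the noisy query, so the estimate collapses onto the closest mode. The sketch below is plain NumPy; the function name, the Gaussian weighting, and the sigma value are illustrative choices for exposition, not the authors' method.

```python
import numpy as np

def prior_weighted_target(p, neighbors, sigma=0.1):
    """Spatial-prior-weighted target for a noisy query point p.

    Weighting candidates by proximity to p (Gaussian kernel of width
    sigma) makes nearby candidates dominate, so the estimate converges
    to the closest of several modes instead of their global average.
    """
    d2 = np.sum((neighbors - p) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))  # spatial prior: near candidates dominate
    return (w[:, None] * neighbors).sum(axis=0) / w.sum()

# Usage: a query point near one sheet of a two-sheet "manifold".
rng = np.random.default_rng(0)
sheet_a = rng.normal([0.0, 0.0, 0.0], 0.02, size=(50, 3))  # close sheet
sheet_b = rng.normal([0.0, 0.0, 1.0], 0.02, size=(50, 3))  # far sheet
p = np.array([0.0, 0.0, 0.05])
est = prior_weighted_target(p, np.vstack([sheet_a, sheet_b]))
```

    Without the prior (i.e., with uniform weights), the estimate would land halfway between the two sheets, on neither surface.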

    Neural View-Interpolation for Sparse Light Field Video

    We suggest representing light field (LF) videos as "one-off" neural networks (NN), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views. Initially, this sounds like a bad idea for three main reasons: First, a NN LF will likely have lower quality than a pixel-basis representation of the same size. Second, only little training data, e.g., 9 exemplars per frame, is available for sparse LF videos. Third, there is no generalization across LFs, but across view and time instead. Consequently, a network needs to be trained for each LF video. Surprisingly, these problems can turn into substantial advantages: Unlike the linear pixel basis, a NN has to come up with a compact, non-linear, i.e., more intelligent, explanation of color, conditioned on the sparse view and time coordinates. As observed for many NNs, however, this representation is interpolatable: if the image output is plausible for the sparse view coordinates, it is plausible for all intermediate, continuous coordinates as well. Our specific network architecture involves a differentiable occlusion-aware warping step, which leads to a compact set of trainable parameters and consequently fast learning and fast execution.
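    The core idea of a coordinate-to-color mapping can be mimicked at toy scale. The sketch below, in plain NumPy, trains a tiny "one-off" network on 9 sparse samples of a 1D view coordinate and then queries it at continuous intermediate coordinates; the signal, architecture, and hyperparameters are all illustrative stand-ins, not the paper's occlusion-aware architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse training "views": 9 view coordinates u in [0, 1] and their colors.
u_train = np.linspace(0.0, 1.0, 9)[:, None]  # (9, 1) view coordinates
c_train = np.sin(2 * np.pi * u_train)        # illustrative observed colors
n = len(u_train)

# One hidden tanh layer; random kinks spread over [0, 1] at init.
W1 = rng.normal(0.0, 2.0, (1, 32))
b1 = rng.uniform(-2.0, 2.0, 32)
W2 = rng.normal(0.0, 0.01, (32, 1))
b2 = np.zeros(1)

lr = 0.03
for _ in range(30000):                       # plain batch gradient descent
    h = np.tanh(u_train @ W1 + b1)           # hidden activations
    err = h @ W2 + b2 - c_train              # gradient of 0.5 * MSE w.r.t. output
    gh = err @ W2.T * (1.0 - h ** 2)         # backprop through tanh
    W2 -= lr * (h.T @ err) / n
    b2 -= lr * err.sum(0) / n
    W1 -= lr * (u_train.T @ gh) / n
    b1 -= lr * gh.sum(0) / n

def render(u):
    """Query the learned mapping at any continuous view coordinate."""
    h = np.tanh(np.array([[u]]) @ W1 + b1)
    return float(h @ W2 + b2)
```

    After fitting only the 9 sparse samples, `render` can be evaluated at any intermediate coordinate, which is the interpolation property the abstract relies on.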