Comment on ``Demystifying Double Robustness: A Comparison of Alternative
Strategies for Estimating a Population Mean from Incomplete Data''
[arXiv:0804.2958]
Comment: Published at http://dx.doi.org/10.1214/07-STS227C in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)
Learning to Generate Images with Perceptual Similarity Metrics
Deep networks are increasingly being applied to problems involving image
synthesis, e.g., generating images from textual descriptions and reconstructing
an input image from a compact representation. Supervised training of
image-synthesis networks typically uses a pixel-wise loss (PL) to indicate the
mismatch between a generated image and its corresponding target image. We
propose instead to use a loss function that is better calibrated to human
perceptual judgments of image quality: the multiscale structural-similarity
score (MS-SSIM). Because MS-SSIM is differentiable, it is easily incorporated
into gradient-descent learning. We compare the consequences of using MS-SSIM
versus PL loss on training deterministic and stochastic autoencoders. For three
different architectures, we collected human judgments of the quality of image
reconstructions. Observers reliably prefer images synthesized by
MS-SSIM-optimized models over those synthesized by PL-optimized models, for two
distinct PL measures (L1 and L2 distances). We also explore the
effect of training objective on image encoding and analyze conditions under
which perceptually-optimized representations yield better performance on image
classification. Finally, we demonstrate the superiority of
perceptually-optimized networks for super-resolution imaging. Just as computer
vision has advanced through the use of convolutional architectures that mimic
the structure of the mammalian visual system, we argue that significant
additional advances can be made in modeling images through the use of training
objectives that are well aligned with the characteristics of human perception.
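Since MS-SSIM is built by averaging single-scale SSIM over successively downsampled images, the core computation the abstract refers to can be sketched at one scale. This is a minimal illustrative sketch in NumPy (patch-level SSIM over whole images, with assumed constants C1, C2 from the standard SSIM formulation); in practice the same formula is written in an autodiff framework so gradients flow for training.

```python
import numpy as np

def ssim(x, y, L=1.0):
    """Single-scale SSIM between two images with values in [0, L].

    MS-SSIM combines this score across several downsampled scales;
    here we compute one scale over the whole image for clarity.
    """
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2)
    )

rng = np.random.default_rng(0)
img = rng.random((8, 8))
noisy = np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0)

# As a loss, one minimizes 1 - SSIM: identical images score exactly 1,
# and the score drops as the reconstruction degrades.
assert abs(ssim(img, img) - 1.0) < 1e-9
assert ssim(img, noisy) < 1.0
```

Because the expression is a smooth function of means, variances, and covariance, its derivative with respect to the generated image is well defined, which is what makes it usable as a drop-in replacement for a pixel-wise loss in gradient-descent learning.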
Loss Function Based Ranking in Two-Stage, Hierarchical Models
Several authors have studied the performance of optimal, squared-error-loss (SEL) estimated ranks. Though these are effective, in many applications interest focuses on identifying the relatively good (e.g., in the upper 10%) or relatively poor performers. We construct loss functions that address this goal and evaluate candidate rank estimates, some of which optimize specific loss functions. We study performance for a fully parametric hierarchical model with a Gaussian prior and Gaussian sampling distributions, evaluating performance for several loss functions. Results show that though SEL-optimal ranks and percentiles do not specifically focus on classifying with respect to a percentile cut point, they perform very well over a broad range of loss functions. We compare inferences produced by the candidate estimates using data from The Community Tracking Study.
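The SEL-optimal ranks discussed above are posterior mean ranks, which can be sketched from Monte Carlo draws; a percentile-focused loss instead scores classification above a cut point. The model, data, and cut point below are illustrative assumptions, not from the Community Tracking Study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_draws = 10, 4000

# Hypothetical posterior draws of unit-level effects theta_k, as might
# come from a Gaussian-Gaussian hierarchical model (values illustrative).
true_effects = np.linspace(0.0, 2.0, n_units)
draws = true_effects + 0.3 * rng.standard_normal((n_draws, n_units))

# Rank of each unit within each posterior draw (1 = smallest effect).
ranks = draws.argsort(axis=1).argsort(axis=1) + 1

# SEL-optimal rank estimate: the posterior mean rank of each unit.
sel_ranks = ranks.mean(axis=0)

# A loss targeting the upper 20% instead asks for each unit's posterior
# probability of ranking above the cut point.
cut = 0.8 * n_units
p_upper = (ranks > cut).mean(axis=0)
```

The contrast between `sel_ranks` and `p_upper` mirrors the abstract's point: mean ranks summarize overall ordering, while the threshold probabilities directly address classification relative to a percentile cut point.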
