Overconfidence in Photometric Redshift Estimation
We describe a new test of photometric redshift performance given a
spectroscopic redshift sample. This test complements the traditional comparison
of redshift {\it differences} by testing whether the probability density
functions have the correct {\it width}. We test two photometric redshift
codes, BPZ and EAZY, on each of two data sets and find that BPZ is consistently
overconfident (the probability density functions are too narrow) while EAZY produces approximately the
correct level of confidence. We show that this is because EAZY models the
uncertainty in its spectral energy distribution templates, and that post-hoc
smoothing of the BPZ probability density functions provides a reasonable substitute for detailed
modeling of template uncertainties. Either remedy still leaves a small surplus
of galaxies with spectroscopic redshift very far from the peaks. Thus, better
modeling of low-probability tails will be needed for high-precision work such
as dark energy constraints with the Large Synoptic Survey Telescope and other
large surveys.

Comment: accepted to MNRAS
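A width test of the kind described can be sketched as a probability integral transform (PIT) check: evaluate each galaxy's photo-z CDF at its spectroscopic redshift. This is a generic calibration diagnostic in the same spirit as the paper's test, not its actual code; the function name and the Gaussian toy data are illustrative.

```python
import numpy as np

def pit_values(z_grid, pdfs, z_spec):
    """Probability integral transform (PIT) for each galaxy: the CDF of
    its photo-z PDF evaluated at the spectroscopic redshift.

    z_grid : (M,) redshift grid
    pdfs   : (N, M) per-galaxy PDFs on z_grid (need not be normalised)
    z_spec : (N,) spectroscopic redshifts
    """
    z_grid = np.asarray(z_grid, float)
    pdfs = np.atleast_2d(np.asarray(pdfs, float))
    dz = np.diff(z_grid)
    # trapezoid-rule CDF on the grid, normalised to 1 at the upper end
    seg = 0.5 * (pdfs[:, :-1] + pdfs[:, 1:]) * dz
    cdf = np.concatenate(
        [np.zeros((pdfs.shape[0], 1)), np.cumsum(seg, axis=1)], axis=1)
    cdf /= cdf[:, -1:]
    # interpolate each galaxy's CDF at its spectroscopic redshift
    return np.array([np.interp(z, z_grid, c) for z, c in zip(z_spec, cdf)])
```

For well-calibrated PDFs the PIT values are uniform on [0, 1]; overconfident (too-narrow) PDFs pile PIT near 0 and 1, which shows up as excess PIT variance above the uniform value 1/12.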
Soft Contract Verification
Behavioral software contracts are a widely used mechanism for governing the
flow of values between components. However, run-time monitoring and enforcement
of contracts imposes significant overhead and delays discovery of faulty
components to run-time.
To overcome these issues, we present soft contract verification, which aims
to statically prove either complete or partial contract correctness of
components, written in an untyped, higher-order language with first-class
contracts. Our approach uses higher-order symbolic execution, leveraging
contracts as a source of symbolic values including unknown behavioral values,
and employs an updatable heap of contract invariants to reason about
flow-sensitive facts. We prove the symbolic execution soundly approximates the
dynamic semantics and that verified programs can't be blamed.
The approach is able to analyze first-class contracts, recursive data
structures, unknown functions, and control-flow-sensitive refinements of
values, which are all idiomatic in dynamic languages. It makes effective use of
an off-the-shelf solver to decide problems without heavy encodings. The
approach is competitive with a wide range of existing tools---including type
systems, flow analyzers, and model checkers---on their own benchmarks.

Comment: ICFP '14, September 1-6, 2014, Gothenburg, Sweden
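The idea of leveraging contracts as a source of symbolic values can be made concrete in a toy sketch: an opaque value accumulates the flat contracts it is known to satisfy, and later checks are either discharged from that remembered set, refuted, or left as residual run-time checks. This caricature ignores higher-order contracts, blame, and the paper's flow-sensitive heap of invariants; every name below is illustrative.

```python
# Toy tri-valued contract checking over symbolic ("opaque") values.
PREDICATES = {"int?": lambda v: isinstance(v, int),
              "pos?": lambda v: isinstance(v, int) and v > 0}

class Opaque:
    """A symbolic value: stands for an unknown input, refined by the set
    of flat contracts it is known to satisfy."""
    def __init__(self, refinements=()):
        self.refinements = set(refinements)

def check(name, value):
    """Tri-valued check: True (proved), False (refuted),
    None (unknown: a residual run-time check remains)."""
    if isinstance(value, Opaque):
        return True if name in value.refinements else None
    return PREDICATES[name](value)

def assume(name, value):
    """After a passed run-time check, remember the refinement so later
    checks on the same value can be discharged statically."""
    if isinstance(value, Opaque):
        value.refinements.add(name)
    return value
```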
Size-Change Termination as a Contract
Termination is an important but undecidable program property, which has led
to a large body of work on static methods for conservatively predicting or
enforcing termination. One such method is the size-change termination approach
of Lee, Jones, and Ben-Amram, which operates in two phases: (1) abstract
programs into "size-change graphs," and (2) check these graphs for the
size-change property: the existence of paths that lead to infinite decreasing
sequences.
We transpose these two phases with an operational semantics that accounts for
the run-time enforcement of the size-change property, postponing (or entirely
avoiding) program abstraction. This choice has two key consequences: (1)
size-change termination can be checked at run-time and (2) termination can be
rephrased as a safety property analyzed using existing methods for systematic
abstraction.
We formulate run-time size-change checks as contracts in the style of Findler
and Felleisen. The result complements existing contracts that enforce partial
correctness specifications to obtain contracts for total correctness. Our
approach combines the robustness of the size-change principle for termination
with the precise information available at run-time. It has tunable overhead and
can check for nontermination without the conservativeness necessary in static
checking. To obtain a sound and computable termination analysis, we apply
existing abstract interpretation techniques directly to the operational
semantics, avoiding the need for custom abstractions for termination. The
resulting analyzer is competitive with existing, purpose-built analyzers.
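A run-time size-change check in the style described can be sketched as a contract combinator. The sketch below enforces a single strictly decreasing natural-number measure along the chain of active recursive calls, a deliberate simplification: the full size-change principle tracks descent across several parameters via size-change graphs. All names are hypothetical, not the paper's implementation.

```python
import functools

def size_change_contract(measure):
    """Contract combinator: each nested recursive call must strictly
    decrease `measure(args)`, a map into the naturals (well-founded)."""
    def deco(fn):
        stack = []  # measures of the currently active (nested) calls
        @functools.wraps(fn)
        def wrapped(*args):
            m = measure(*args)
            if m < 0:
                raise AssertionError("measure must be a natural number")
            if stack and not (m < stack[-1]):
                raise AssertionError(
                    f"size-change violation: measure went {stack[-1]} -> {m}")
            stack.append(m)
            try:
                return fn(*args)
            finally:
                stack.pop()  # sibling calls see only the active chain
        return wrapped
    return deco

@size_change_contract(lambda n: n)
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)
```

Because the stack is popped in `finally`, only the active call chain is constrained, so tree-shaped recursion (two sibling calls at the same depth) is still accepted.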
Localisation of gamma-ray interaction points in thick monolithic CeBr3 and LaBr3:Ce scintillators
Localisation of gamma-ray interaction points in monolithic scintillator
crystals can simplify the design and improve the performance of a future
Compton telescope for gamma-ray astronomy. In this paper we compare the
position resolution of three monolithic scintillators: a 28x28x20 mm3 (length x
breadth x thickness) LaBr3:Ce crystal, a 25x25x20 mm3 CeBr3 crystal and a
25x25x10 mm3 CeBr3 crystal. Each crystal was encapsulated and coupled to a
4x4 array of silicon photomultipliers through an optical window. The
measurements were conducted using 81 keV and 356 keV gamma-rays from a
collimated 133Ba source. The 3D position reconstruction of interaction points
was performed using artificial neural networks trained with experimental data.
Although the position resolution was significantly better for the thinner
crystal, the 20 mm thick CeBr3 crystal showed an acceptable resolution of about
5.4 mm FWHM for the x and y coordinates, and 7.8 mm FWHM for the z-coordinate
(crystal depth) at 356 keV. These values were obtained from the full position
scans of the crystal sides. The position resolution of the LaBr3:Ce crystal was
found to be considerably worse, presumably due to the highly diffusive optical
interface between the crystal and the optical window of the enclosure. The
energy resolution (FWHM) measured for 662 keV gamma-rays was 4.0% for LaBr3:Ce
and 5.5% for CeBr3. The same crystals equipped with a PMT (Hamamatsu R6322-100)
gave an energy resolution of 3.0% and 4.7%, respectively.
Integrating methods for determining length-at-age to improve growth estimates for two large scombrids
Fish growth is commonly estimated from length-at-age data
obtained from otoliths. There are several techniques for estimating length-at-age from otoliths including 1) direct observed counts of annual increments; 2) age adjustment based on a categorization of otolith margins; 3) age adjustment based on known periods of spawning and annuli formation; 4) back-calculation to all annuli, and 5) back-calculation to the last annulus only. In this study we
compared growth estimates (von Bertalanffy growth functions) obtained from the above five methods for estimating length-at-age from otoliths for two large scombrids: narrow-barred Spanish mackerel (Scomberomorus
commerson) and broad-barred king mackerel (Scomberomorus semifasciatus). Likelihood ratio tests revealed that the largest differences in growth occurred between the back-calculation methods and the observed and adjusted methods for both species of mackerel. The pattern, however, was
more pronounced for S. commerson than for S. semifasciatus, because of the strong effect of gear selectivity
demonstrated for S. commerson. We propose a method of substituting length-at-age data from observed or adjusted methods with back-calculated length-at-age data to provide
more appropriate estimates of population growth than those obtained with the individual methods alone, particularly when faster-growing young fish are disproportionately
selected for. Substitution of observed or adjusted length-at-age data with back-calculated length-at-age data provided more realistic estimates of length for younger ages than observed or adjusted methods as well as more
realistic estimates of mean maximum length than those derived from back-calculation methods alone.
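The growth model fitted in the study is the standard von Bertalanffy growth function, L(t) = L_inf * (1 - exp(-K * (t - t0))), where L_inf is the asymptotic length, K the growth coefficient, and t0 the hypothetical age at zero length. A minimal sketch of evaluating it, with made-up parameter values of a plausible order for a large scombrid (not the paper's estimates):

```python
import numpy as np

def vbgf(age, L_inf, K, t0):
    """von Bertalanffy growth function: predicted length-at-age
    L(t) = L_inf * (1 - exp(-K * (t - t0)))."""
    return L_inf * (1.0 - np.exp(-K * (np.asarray(age, float) - t0)))

# Illustrative (made-up) parameters: asymptotic length ~130 cm,
# growth coefficient K ~0.25 per year, t0 ~ -0.5 years.
ages = np.arange(0, 21)
lengths = vbgf(ages, L_inf=130.0, K=0.25, t0=-0.5)
```

Comparing methods then amounts to fitting (L_inf, K, t0) to each of the five length-at-age data sets and testing the fitted curves against one another, e.g. with the likelihood ratio tests used in the study.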
