De Gustibus non est Taxandum: Heterogeneity in Preferences and Optimal Redistribution
The prominent but unproven intuition that preference heterogeneity reduces redistribution in a standard optimal tax model is shown to hold under the plausible condition that the distribution of preferences for consumption relative to leisure rises, in terms of first-order stochastic dominance, with income. Given mainstream functional form assumptions on utility and the distributions of ability and preferences, a simple statistic for the effect of preference heterogeneity on marginal tax rates is derived. Numerical simulations and suggestive empirical evidence demonstrate the link between this potentially measurable statistic and the quantitative implications of preference heterogeneity for policy.
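The key condition above, that the preference distribution shifts upward with income in the sense of first-order stochastic dominance, can be checked numerically for two samples: one distribution dominates another when its empirical CDF lies weakly below the other's everywhere. The sketch below is illustrative only; the sample distributions are invented and not taken from the paper.

```python
import numpy as np

def fosd(sample_high, sample_low, grid=None):
    """Return True if sample_high first-order stochastically dominates
    sample_low, i.e. its empirical CDF lies (weakly) below everywhere."""
    if grid is None:
        grid = np.union1d(sample_high, sample_low)
    # Empirical CDF: fraction of the sample at or below each grid point.
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return bool(np.all(cdf(sample_high) <= cdf(sample_low)))

# Illustrative: preference draws for a low-income and a high-income group,
# where the high-income distribution is a pure upward location shift,
# so dominance holds by construction.
rng = np.random.default_rng(1)
low = rng.normal(loc=1.0, scale=0.5, size=1000)
high = low + 0.3
print(fosd(high, low))   # True
print(fosd(low, high))   # False
```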
Positive and Normative Judgments Implicit in U.S. Tax Policy, and the Costs of Unequal Growth and Recessions
Calculating the welfare implications of changes to economic policy or shocks to the economy requires economists to decide on a normative criterion. One way to make that decision is to elicit the relevant moral criteria from real-world policy choices, converting a normative decision into a positive inference exercise as in, for example, the recent surge of so-called “inverse-optimum” research. We find that capitalizing on the potential of this approach is not as straightforward as we might hope. We perform the inverse-optimum inference on U.S. tax policy from 1979 through 2010 and identify two broad explanations for its evolution. These explanations, however, either undermine the reliability of the inference exercise's conclusions or challenge conventional assumptions upon which economists routinely rely when performing welfare evaluations. We emphasize the need for better evidence on society's positive and normative judgments in order to resolve the questions these findings raise.
A Qualitative Investigation of the Generalization Behavior of CNNs for Instrument Recognition
Artificial neural networks (ANNs) have established themselves as the most successful tool in machine learning for audio data, achieving high classification rates [1]. From a scientific perspective, however, a significant drawback lies in the difficulty of interpreting what ANNs have actually learned [2, 3]. To address this problem, this work examines the learning and generalization process of a convolutional neural network (CNN) for multi-label instrument recognition in the network's hidden layers. We inspect the activations of all layers for different instrument classes to determine at which depth the network is able to recognize two stimuli from the same class as similar. We repeat the experiment with the same stimuli for a CNN trained to recognize four emotions. On the one hand, this confirms many of our observations on the generalization process; at the same time, the results suggest that the network trained on emotion recognition is able to learn instrument-typical patterns.
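The layer-wise comparison described above can be sketched as follows: for each hidden layer, measure whether same-class stimuli produce more similar activations than different-class stimuli. The layer names, toy activations, and similarity measure (cosine) here are illustrative assumptions, not the authors' actual model or metric.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened activation vectors.
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def layerwise_class_similarity(activations, labels):
    """For each layer, report mean similarity of same-class stimulus
    pairs minus mean similarity of different-class pairs.

    activations: dict layer_name -> array of shape (n_stimuli, n_units)
    labels:      array of shape (n_stimuli,) with class ids
    """
    report = {}
    for layer, acts in activations.items():
        same, diff = [], []
        n = len(labels)
        for i in range(n):
            for j in range(i + 1, n):
                s = cosine_similarity(acts[i], acts[j])
                (same if labels[i] == labels[j] else diff).append(s)
        report[layer] = np.mean(same) - np.mean(diff)
    return report

# Toy example: a shallow "layer" with random activations and a deeper
# "layer" whose activations cluster by class.
rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1])
shallow = rng.normal(size=(4, 16))
centers = np.array([[1.0] * 16, [-1.0] * 16])
deep = centers[labels] + 0.1 * rng.normal(size=(4, 16))
scores = layerwise_class_similarity({"conv1": shallow, "conv5": deep}, labels)
# A larger gap at a layer means stimuli of the same class are
# recognized as similar at that depth.
```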
Fully differential QCD corrections to single top quark final states
A new next-to-leading order Monte Carlo program for the calculation of fully differential single top quark final states is described and first results are presented. Both the s- and t-channel contributions are included.
Comment: 3 pages, 3 figures, talk presented at DPF2000, August 9-12, 2000. To appear in International Journal of Modern Physics
One-loop N-point equivalence among negative-dimensional, Mellin-Barnes and Feynman parametrization approaches to Feynman integrals
We show that at one-loop order, negative-dimensional, Mellin-Barnes (MB) and Feynman parametrization (FP) approaches to Feynman loop integral calculations are equivalent. Starting with a generating functional, for two-point and then for N-point scalar integrals, we show how to reobtain MB results using negative-dimensional and FP techniques. The N-point result is valid for different masses, arbitrary exponents of propagators and arbitrary dimension.
Comment: 11 pages, LaTeX. To be published in J. Phys.
An Exploration of Optimal Stabilization Policy
This paper examines the optimal response of monetary and fiscal policy to a decline in aggregate demand. The theoretical framework is a two-period general equilibrium model in which prices are sticky in the short run and flexible in the long run. Policy is evaluated by how well it raises the welfare of the representative household. Although the model has Keynesian features, its policy prescriptions differ significantly from those of textbook Keynesian analysis. Moreover, the model suggests that the commonly used “bang for the buck” calculations are potentially misleading guides for the welfare effects of alternative fiscal policies.
SUSY Ward identities for multi-gluon helicity amplitudes with massive quarks
We use supersymmetric Ward identities to relate multi-gluon helicity amplitudes involving a pair of massive quarks to amplitudes with massive scalars. This allows us to use the recent results for scalar amplitudes with an arbitrary number of gluons, obtained by on-shell recursion relations, to obtain scattering amplitudes involving top quarks.
Comment: 22 pages, references added
The Pagami Creek smoke plume after long-range transport to the upper troposphere over Europe – aerosol properties and black carbon mixing state
During the CONCERT 2011 field experiment with the DLR research aircraft
Falcon, an enhanced aerosol layer with particle linear depolarization ratios
of 6–8% at 532 nm was observed at altitudes above 10 km over
northeast Germany on 16 September 2011. Dispersion simulations with HYSPLIT
suggest that the elevated aerosol layer originated from the Pagami Creek
forest fire in Minnesota, USA, which caused pyro-convective uplift of
particles and gases. The 3–4 day-old smoke plume had high total refractory
black carbon (rBC) mass concentrations of 0.03–0.35 μg m<sup>−3</sup>
at standard temperature and pressure (STP) with rBC mass equivalent diameter
predominantly smaller than 130 nm. Assuming a core-shell particle structure,
the BC cores exhibit very thick (median: 105–136 nm) BC-free coatings. A
large fraction of the BC-containing particles disintegrated into a BC-free
fragment and a BC fragment while passing through the laser beam of the Single
Particle Soot Photometer (SP2). In this study, the disintegration is a result
of very thick coatings around the BC cores. This is in contrast to a previous
study in a forest-fire plume, where it was hypothesized to be a result of BC
cores being attached to a BC-free particle. For the high-altitude forest-fire
aerosol layer observed in this study, increased mass specific
light-absorption cross sections of BC can be expected due to the very thick
coatings around the BC cores, while this would not be the case for the
attached-type morphology. We estimate the BC mass import from the Pagami
Creek forest fire into the upper troposphere/lower stratosphere (UTLS) region
(best estimate: 25 Mg rBC). A comparison to black carbon emission rates from
aviation underlines the importance of pyro-convection on the BC load in the
UTLS region. Our study provides detailed information on the microphysics and
the mixing state of BC in the forest-fire aerosol layer in the upper
troposphere that can be used to better understand and investigate the
radiative impact of such upper tropospheric aerosol layers.
Measurement of the strong coupling alpha_S from the three-jet rate in e+e- annihilation using JADE data
We present a measurement of the strong coupling alpha_S using the three-jet rate measured with the Durham algorithm in e+e- annihilation, using data of the JADE experiment at centre-of-mass energies between 14 and 44 GeV. Recent theoretical improvements provide predictions of the three-jet rate in e+e- annihilation at next-to-next-to-leading order. In this paper the measured three-jet rate is compared to next-to-next-to-leading order predictions matched with next-to-leading logarithmic approximations to determine the strong coupling, yielding alpha_S(MZ) = 0.1199 +- 0.0010 (stat.) +- 0.0021 (exp.) +- 0.0054 (had.) +- 0.0007 (theo.), consistent with the world average.
Comment: 27 pages, 8 figures
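For a rough sense of the overall precision, the quoted uncertainty components can be combined in quadrature. This assumes the components are uncorrelated, which the paper itself may not do; the calculation below is only an order-of-magnitude check on the quoted result.

```python
import math

# Uncertainty components on alpha_S(MZ) = 0.1199 as quoted in the abstract.
components = {"stat": 0.0010, "exp": 0.0021, "had": 0.0054, "theo": 0.0007}

# Quadrature sum, valid only if the components are uncorrelated.
total = math.sqrt(sum(v**2 for v in components.values()))
print(f"alpha_S(MZ) = 0.1199 +- {total:.4f}")  # total ~ 0.0059
```

The hadronization uncertainty clearly dominates the combination.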
Subtraction Terms for Hadronic Production Processes at Next-to-Next-to-Leading Order
I describe a subtraction scheme for the next-to-next-to-leading order calculation of single inclusive production at hadron colliders. Such processes include Drell-Yan, W^{+/-}, Z and Higgs boson production. The key to such a calculation is a treatment of initial-state radiation which preserves the production characteristics, such as the rapidity distribution, of the process involved. The method builds upon the Dipole Formalism and, with proper modifications, could be applied to deep inelastic scattering and e^+ e^- annihilation to hadrons.
Comment: 4 pages
