A stochastic variational framework for fitting and diagnosing generalized linear mixed models
In stochastic variational inference, the variational Bayes objective function
is optimized using stochastic gradient approximation, where gradients computed
on small random subsets of data are used to approximate the true gradient over
the whole data set. This enables complex models to be fit to large data sets as
data can be processed in mini-batches. In this article, we extend stochastic
variational inference for conjugate-exponential models to nonconjugate models
and present a stochastic nonconjugate variational message passing algorithm for
fitting generalized linear mixed models that is scalable to large data sets. In
addition, we show that diagnostics for prior-likelihood conflict, which are
useful for Bayesian model criticism, can be obtained from nonconjugate
variational message passing automatically, as an alternative to
simulation-based Markov chain Monte Carlo methods. Finally, we demonstrate that
for moderate-sized data sets, convergence can be accelerated by using the
stochastic version of nonconjugate variational message passing in the initial
stage of optimization before switching to the standard version.
Comment: 42 pages, 13 figures, 9 tables
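The mini-batch gradient idea in the abstract can be sketched in a few lines. This is a minimal illustration on a toy Gaussian-mean problem, not the paper's nonconjugate variational message passing algorithm: the model, the parameter name `lam`, and the step-size schedule are all assumptions chosen for clarity. The key ingredients it does show are (a) rescaling a mini-batch gradient by `N / batch_size` so it is an unbiased estimate of the full-data gradient, and (b) a Robbins-Monro step-size sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's GLMM): N observations,
# one variational location parameter lam to be fit to the data mean.
N = 10_000
x = rng.normal(loc=2.0, scale=1.0, size=N)
lam = 0.0
batch_size = 100

for t in range(1, 501):
    idx = rng.choice(N, size=batch_size, replace=False)
    # Mini-batch gradient scaled by N / batch_size is an unbiased
    # estimate of the full-data gradient (here it reduces to
    # batch mean minus current estimate).
    grad = (N / batch_size) * np.sum(x[idx] - lam) / N
    # Robbins-Monro step sizes: sum(rho) = inf, sum(rho^2) < inf.
    rho = 1.0 / (t + 10)
    lam += rho * grad

print(lam)  # converges toward the sample mean of x
```

Each update touches only 100 of the 10,000 observations, which is exactly what makes the approach scalable to data sets where a full-gradient pass per iteration would be too expensive.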
Woodford goes to Africa
This paper analyses the effects of inflation shocks, demand shocks, and aid shocks on low-income, quasi-emerging-market economies, and discusses how monetary policy can be used to manage these effects. We make use of a model developed for such economies by Adam et al. (2007). We examine the effects of four things which this model features, which we take to be typical of such economies. These are: the existence of a tradeables/non-tradeables production structure, the fact that international capital movements are - at least initially - confined to the effects of currency substitution by domestic residents, the use of targets for financial assets in the implementation of monetary policy, and the pursuit, in some countries, of a fixed exchange rate. We then modify the model to examine the effect on such economies of three major changes, changes which we take to be part of the transition by such economies towards more fully-fledged emerging-market status: an opening of the capital account so that uncovered interest parity comes to hold, a move to floating exchange rates, and the replacement of fixed stocks of financial aggregates by the pursuit of a Taylor rule in the conduct of monetary policy.
Keywords: currency substitution, emerging market macroeconomics, interactions between fiscal and monetary policy, Taylor rule
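The Taylor rule mentioned as the end point of the monetary-policy transition has a simple standard form. The sketch below uses Taylor's (1993) original illustrative coefficients of 0.5 on both gaps; these are conventional textbook values, not the calibration of Adam et al. (2007).

```python
def taylor_rule(r_star, pi, pi_target, output_gap, phi_pi=0.5, phi_y=0.5):
    """Nominal policy rate under a standard Taylor rule:
    i = r* + pi + phi_pi * (pi - pi_target) + phi_y * output_gap.
    Coefficients default to Taylor's (1993) illustrative 0.5/0.5.
    """
    return r_star + pi + phi_pi * (pi - pi_target) + phi_y * output_gap

# Example: 2% equilibrium real rate, 5% inflation against a 4% target,
# and a 1% positive output gap.
rate = taylor_rule(r_star=0.02, pi=0.05, pi_target=0.04, output_gap=0.01)
print(f"{rate:.3%}")  # prints 8.000%
```

The contrast the paper draws is between this kind of interest-rate feedback rule and the alternative regime in which the central bank instead holds stocks of financial aggregates fixed.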
Effect of impact ionization in the InGaAs absorber on excess noise of avalanche photodiodes
The effects of impact ionization in the InGaAs absorption layer on the multiplication, excess noise and breakdown voltage are modeled for avalanche photodiodes (APDs), both with InP and with InAlAs multiplication regions. The calculations allow for dead space effects and for the low field electron ionization observed in InGaAs. The results confirm that impact ionization in the InGaAs absorption layer increases the excess noise in InP APDs and that the effect imposes tight constraints on the doping of the charge control layer if avalanche noise is to be minimized. However, the excess noise of InAlAs APDs is predicted to be reduced by impact ionization in the InGaAs layer. Furthermore, the breakdown voltage of InAlAs APDs is less sensitive to ionization in the InGaAs layer, and these results increase tolerance to doping variations in the field control layer.
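For context on why the ionization-coefficient ratio of the multiplication region matters for excess noise, the classic local model due to McIntyre gives F(M) = kM + (1 - k)(2 - 1/M). This local formula deliberately ignores the dead-space effects that the abstract's calculations include, so it is only an order-of-magnitude sketch; the value of k varies with field and material, and the specific numbers below are illustrative assumptions.

```python
def mcintyre_excess_noise(M, k):
    """Local-model excess noise factor F(M) for an APD under pure
    electron injection (McIntyre, 1966):
        F(M) = k*M + (1 - k) * (2 - 1/M)
    where M is the mean multiplication and k is the ratio of the
    hole to electron ionization coefficients in the multiplication
    region. Ignores dead-space effects.
    """
    return k * M + (1.0 - k) * (2.0 - 1.0 / M)

# A smaller k yields a lower excess noise factor at the same gain,
# which is the qualitative reason low-k multiplication regions are
# preferred (example k values are illustrative only):
for k in (0.4, 0.2):
    print(f"k={k}: F(10) = {mcintyre_excess_noise(10.0, k):.2f}")
```

The abstract's point is subtler than this local picture: extra ionization in the InGaAs absorber shifts the effective noise up in InP devices but down in InAlAs ones, which a single-region local model cannot capture.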
Adversarial Semantic Scene Completion from a Single Depth Image
We propose a method to reconstruct, complete and semantically label a 3D
scene from a single input depth image. We improve the accuracy of the regressed
semantic 3D maps by a novel architecture based on adversarial learning. In
particular, we suggest using multiple adversarial loss terms that not only
enforce realistic outputs with respect to the ground truth, but also an
effective embedding of the internal features. This is done by correlating the
latent features of the encoder working on partial 2.5D data with the latent
features extracted from a variational 3D auto-encoder trained to reconstruct
the complete semantic scene. In addition, differently from other approaches
that operate entirely through 3D convolutions, at test time we retain the
original 2.5D structure of the input during downsampling to improve the
effectiveness of the internal representation of our model. We test our approach
on the main benchmark datasets for semantic scene completion to qualitatively
and quantitatively assess the effectiveness of our proposal.
Comment: 2018 International Conference on 3D Vision (3DV)
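The combination of loss terms described above can be sketched schematically. Everything here is an assumption for illustration: the weights, the mean-squared latent-alignment term, and the non-saturating adversarial term are generic stand-ins, not the paper's actual architecture or objective. What the sketch shows is the structure: a voxel reconstruction term, an adversarial term from a discriminator score, and a term tying the 2.5D encoder's latent features to those of the pretrained 3D auto-encoder.

```python
import numpy as np

def total_loss(pred_vox, gt_vox, disc_score, z_25d, z_3d,
               w_adv=0.1, w_lat=1.0):
    """Illustrative combined objective (weights are assumptions):
    reconstruction + adversarial + latent-feature alignment."""
    recon = np.mean((pred_vox - gt_vox) ** 2)      # voxel-wise reconstruction
    adv = -np.mean(np.log(disc_score + 1e-8))      # non-saturating GAN term
    latent = np.mean((z_25d - z_3d) ** 2)          # align encoder latents with
                                                   # the 3D auto-encoder's latents
    return recon + w_adv * adv + w_lat * latent

# Tiny synthetic example with random tensors:
rng = np.random.default_rng(1)
pred = rng.random((4, 4, 4))
gt = rng.random((4, 4, 4))
score = rng.uniform(0.1, 0.9, size=8)   # discriminator outputs in (0, 1)
z_a = rng.normal(size=16)
z_b = rng.normal(size=16)
loss = total_loss(pred, gt, score, z_a, z_b)
print(loss)
```

The latent term is the piece specific to this abstract's idea: rather than supervising only the output voxels, it also pushes the partial-view encoder's internal representation toward that of an auto-encoder that has seen complete semantic scenes.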
