Core and 'crust': Consumer prices and the term structure of interest rates
We propose a no-arbitrage model that jointly explains the dynamics of consumer prices as well as the nominal and real term structures of risk-free rates. In our framework, distinct core, food, and energy price series combine into a measure of total inflation to price nominal Treasuries. This approach captures different frequencies in inflation fluctuations: shocks to core are more persistent and less volatile than shocks to food and, especially, energy (the 'crust'). We find that a common structure of latent factors determines and predicts the term structure of yields and inflation. The model outperforms popular benchmarks and is on par with the Survey of Professional Forecasters in forecasting inflation. Real rates implied by our model uncover the presence of a time-varying component in TIPS yields that we attribute to disruptions in the inflation-indexed bond market. Finally, we find a pronounced declining pattern in the inflation risk premium that illustrates the changing nature of inflation risk in nominal Treasuries.
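As a minimal sketch of the kind of setup described above (the weights, factor loadings, and notation below are purely illustrative assumptions, not the paper's specification):

```latex
% Illustrative sketch only: weights w and loadings \delta are placeholders.
% Total inflation built from core, food, and energy components:
\pi^{\mathrm{tot}}_t = w_c\,\pi^{\mathrm{core}}_t + w_f\,\pi^{\mathrm{food}}_t + w_e\,\pi^{\mathrm{energy}}_t

% Each component is affine in a common vector of latent factors X_t, whose
% dynamics give core shocks more persistence and less volatility than energy shocks:
\pi^{i}_t = \delta^{i}_0 + (\delta^{i}_1)^{\top} X_t, \qquad
X_{t+1} = \mu + \Phi X_t + \Sigma\,\varepsilon_{t+1}

% No-arbitrage pricing of an n-period nominal zero-coupon bond, deflating the
% real stochastic discount factor M_{t+1} by total inflation:
P^{(n)}_t = \mathrm{E}_t\!\left[ M^{\$}_{t+1}\, P^{(n-1)}_{t+1} \right], \qquad
M^{\$}_{t+1} = M_{t+1}\, e^{-\pi^{\mathrm{tot}}_{t+1}}
```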
Financial intermediation, investment dynamics and business cycle fluctuations
How important are financial friction shocks in business cycle fluctuations? To answer this question, I use micro data to quantify key features of US financial markets. I then construct a dynamic equilibrium model that is consistent with these features and fit the model to business cycle data using Bayesian methods. In my micro data analysis, I establish facts that may be of independent interest. For example, I find that a substantial 35% of firm investment is funded using financial markets. The dynamic model introduces price and wage rigidities and a financial intermediation shock into Kiyotaki and Moore (2008). According to the estimated model, the financial intermediation shock explains around 35% of GDP and 60% of investment volatility. The estimation assigns such a large role to the financial shock for two reasons: (i) the shock is closely related to the interest rate spread, and this spread is strongly countercyclical; and (ii) according to the model, the response of consumption, investment, employment and asset prices to a financial shock resembles the behavior of these variables over the business cycle.
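As a rough illustration of the variance-decomposition statement above, the sketch below computes unconditional variance shares from truncated impulse responses of a linear model. The impulse responses, shock labels, and numbers are made-up placeholders, not the estimated model's output.

```python
import numpy as np

def variance_shares(irfs, shock_stds):
    """Unconditional variance shares by shock, from truncated impulse responses.

    irfs:       (H, n_vars, n_shocks) responses to unit shocks at horizons 0..H-1
    shock_stds: (n_shocks,) standard deviations of the structural shocks
    Returns an (n_vars, n_shocks) array whose rows sum to 1.
    """
    contrib = np.sum((irfs * shock_stds) ** 2, axis=0)   # (n_vars, n_shocks)
    return contrib / contrib.sum(axis=1, keepdims=True)

# Placeholder IRFs for (GDP, investment) to (financial, technology) shocks:
# geometrically decaying responses with invented impact effects.
H = 40
decay = 0.9 ** np.arange(H)[:, None, None]
impact = np.array([[0.4, 0.5],    # GDP response to (financial, technology)
                   [1.5, 1.0]])   # investment response
irfs = decay * impact

shares = variance_shares(irfs, shock_stds=np.array([1.0, 1.0]))
print(shares)  # row i, column j: share of variable i's variance due to shock j
```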
Compton-thick AGN in the NuSTAR era III: A systematic study of the torus covering factor
We present the analysis of a sample of 35 candidate Compton thick (CT-)
active galactic nuclei (AGNs) selected in the nearby Universe (average redshift
~0.03) with the Swift-BAT 100-month survey. All sources have available
NuSTAR data, thus allowing us to constrain with unprecedented quality important
spectral parameters such as the obscuring torus line-of-sight column density
(N_{H, z}), the average torus column density (N_{H, tor}) and the torus
covering factor (f_c). We compare the best-fit results obtained with the widely
used MyTorus (Murphy et al. 2009) model with those of the recently published
borus02 model (Balokovic et al. 2018) used in the same geometrical
configuration as MyTorus (i.e., with f_c=0.5). We find a remarkable agreement
between the two, although with increasing dispersion in N_{H, z} moving towards
higher column densities. We then use borus02 to measure f_c. High-f_c sources
have, on average, a smaller offset between N_{H, z} and N_{H, tor} than low-f_c
ones. Therefore, low f_c values can be linked to a "patchy torus" scenario,
where the AGN is seen through an over-dense region in the torus, while high-f_c
objects are more likely to be obscured by a more uniform gas distribution.
Finally, we find potential evidence of an inverse trend between f_c and the AGN
2-10 keV luminosity, i.e., sources with higher f_c values have on average lower
luminosities.
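As an illustration of the "patchy torus" argument above, the sketch below flags sources whose line-of-sight column density differs strongly from the average torus column at low covering factor. All values and the 0.3 dex threshold are invented for illustration, not taken from the sample.

```python
import numpy as np

# Illustrative (made-up) best-fit values for a few sources:
# line-of-sight column density n_h_los and average torus column n_h_tor (cm^-2),
# plus a covering factor f_c from a borus02-like fit.
n_h_los = np.array([2.0e24, 8.0e23, 3.0e24, 1.5e24])
n_h_tor = np.array([5.0e23, 7.0e23, 2.8e24, 1.4e24])
f_c     = np.array([0.15,   0.25,   0.85,   0.70])

# Offset between line-of-sight and average torus column densities, in dex.
offset_dex = np.abs(np.log10(n_h_los) - np.log10(n_h_tor))

# In the scenario sketched in the abstract, low-f_c sources tend to show large
# offsets (the AGN is seen through an over-dense clump of a patchy torus), while
# high-f_c sources have n_h_los ~ n_h_tor (a more uniform obscurer).
patchy_candidate = (f_c < 0.5) & (offset_dex > 0.3)   # 0.3 dex cut is arbitrary
for i, flag in enumerate(patchy_candidate):
    print(f"source {i}: f_c={f_c[i]:.2f}, offset={offset_dex[i]:.2f} dex, patchy={flag}")
```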
Financial intermediation, investment dynamics and business cycle fluctuations
How important are financial friction shocks in business cycle fluctuations? To answer this question, I use micro data to quantify key features of US financial markets. I then construct a dynamic equilibrium model that is consistent with these features and fit the model to business cycle data using Bayesian methods. In my micro data analysis, I establish facts that may be of independent interest. For example, I find that a substantial 33% of firm investment is funded using financial markets. The dynamic model introduces price and wage rigidities and a financial intermediation shock into Kiyotaki and Moore (2008). According to the estimated model, the financial intermediation shock explains around 40% of GDP and 55% of investment volatility. The estimation assigns such a large role to the financial shock for two reasons: (i) the shock is closely related to the interest rate spread, and this spread is strongly countercyclical; and (ii) according to the model, the response of consumption, investment, employment and asset prices to a financial shock resembles the behavior of these variables over the business cycle.
Sensitivity Projections for Dark Matter Searches with the Fermi Large Area Telescope
The nature of dark matter is a longstanding enigma of physics; it may consist
of particles beyond the Standard Model that are still elusive to experiments.
Among indirect search techniques, which look for stable products from the
annihilation or decay of dark matter particles, or from axions coupling to
high-energy photons, observations of the γ-ray sky have come to
prominence over the last few years, because of the excellent sensitivity of the
Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope mission. The
LAT energy range from 20 MeV to above 300 GeV is particularly well suited for
searching for products of the interactions of dark matter particles. In this
report we describe methods used to search for evidence of dark matter with the
LAT, and review the status of searches performed with up to six years of LAT
data. We also discuss the factors that determine the sensitivities of these
searches, including the magnitudes of the signals and the relevant backgrounds,
considering both statistical and systematic uncertainties. We project the
expected sensitivities of each search method for 10 and 15 years of LAT data
taking. In particular, we find that the sensitivity of searches targeting dwarf
galaxies, which provide the best limits currently, will improve faster than the
square root of observing time. Current LAT limits for dwarf galaxies using six
years of data reach the thermal relic level for masses up to 120 GeV for the
bb̄ annihilation channel for reasonable dark matter density profiles.
With projected discoveries of additional dwarfs, these limits could extend to
about 250 GeV. With as much as 15 years of LAT data these searches would be
sensitive to dark matter annihilations at the thermal relic cross section for
masses greater than 400 GeV (200 GeV) in the bb̄ (τ+τ-)
annihilation channels.
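As a toy illustration of why the dwarf-galaxy limits can improve faster than the square root of observing time, the sketch below assumes a stacked, background-limited search whose statistical reach scales as 1/sqrt(N*t); the dwarf counts are invented, and the scaling ignores J-factor weights and systematics.

```python
import numpy as np

def relative_limit(t_years, n_dwarfs):
    """Toy scaling for a stacked, background-limited upper limit.

    Assumes n_dwarfs similar targets combined, so the reach improves roughly
    as 1/sqrt(n_dwarfs * t_years). Real LAT projections depend on J-factors,
    backgrounds, and systematics; this is only a scaling illustration.
    """
    return 1.0 / np.sqrt(n_dwarfs * t_years)

# If new dwarfs keep being discovered while data accumulate, the limit
# improves faster than the square root of observing time alone.
baseline = relative_limit(6, 25)                      # 6 yr, made-up 25 dwarfs
for t, n in [(6, 25), (10, 45), (15, 60)]:            # made-up future counts
    fixed = relative_limit(t, 25) / baseline          # sqrt(t) improvement only
    growing = relative_limit(t, n) / baseline         # adding new dwarfs too
    print(f"{t:>2} yr: fixed sample -> x{fixed:.2f}, growing sample -> x{growing:.2f}")
```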
The Chandra COSMOS Legacy Survey: Energy Spectrum of the Cosmic X-Ray Background and Constraints on Undetected Populations
Using Chandra observations in the 2.15 deg^2 COSMOS-Legacy field, we present one of the most accurate measurements of the cosmic X-ray background (CXB) spectrum to date in the [0.3-7] keV energy band. The CXB has three distinct components: contributions from two Galactic collisional thermal plasmas at kT ~ 0.27 and 0.07 keV and an extragalactic power law with a photon spectral index Γ = 1.45 ± 0.02. The 1 keV normalization of the extragalactic component is 10.91 ± 0.16 keV cm^-2 s^-1 sr^-1 keV^-1. Removing all X-ray-detected sources, the remaining unresolved CXB is best fit by a power law with normalization 4.18 ± 0.26 keV cm^-2 s^-1 sr^-1 keV^-1 and photon spectral index Γ = 1.57 ± 0.10. Removing faint galaxies down to i_AB ~ 27-28 leaves a hard spectrum with Γ ~ 1.25 and a 1 keV normalization of ~1.37 keV cm^-2 s^-1 sr^-1 keV^-1. This means that ~91% of the observed CXB is resolved into detected X-ray sources and undetected galaxies. Unresolved sources that contribute ~8%-9% of the total CXB show marginal evidence of being harder and possibly more obscured than resolved sources. Another ~1% of the CXB can be attributed to still-undetected star-forming galaxies and absorbed active galactic nuclei. Given these limits, we investigate a scenario in which early black holes fully account for the non-source fraction of the CXB and constrain some of their properties. In order not to exceed the remaining CXB and the z ~ 6 accreted mass density, such a population of black holes must grow in Compton-thick envelopes with N_H > 1.6 × 10^25 cm^-2 and form in extremely low-metallicity environments (Z ~ 10^-3 Z_⊙).
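Schematically, the fitted CXB model described above can be written as follows; the thermal-plasma shape T(E; kT) is left abstract and the parameterization is illustrative, not the exact one used in the analysis.

```latex
% Schematic decomposition of the observed CXB spectrum (E dN/dE form, in
% keV cm^-2 s^-1 sr^-1 keV^-1); \mathcal{T}(E; kT) stands in for a thermal-plasma model.
E\,\frac{dN}{dE}\bigg|_{\mathrm{CXB}}
  = \underbrace{A_1\,\mathcal{T}(E;\,kT \simeq 0.27\ \mathrm{keV})
    + A_2\,\mathcal{T}(E;\,kT \simeq 0.07\ \mathrm{keV})}_{\text{Galactic collisional plasmas}}
  + \underbrace{N_{1\,\mathrm{keV}}\left(\frac{E}{1\ \mathrm{keV}}\right)^{1-\Gamma}}_{\text{extragalactic power law}}

% Abstract values for the extragalactic term: \Gamma = 1.45 \pm 0.02 with
% N_{1\,\mathrm{keV}} = 10.91 \pm 0.16; after removing detected X-ray sources,
% the unresolved remainder has \Gamma = 1.57 \pm 0.10 and N_{1\,\mathrm{keV}} = 4.18 \pm 0.26.
```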
Compton thick AGN in the NuSTAR era
We present the 2-100 keV spectral analysis of 30 candidate Compton thick
(CT-) active galactic nuclei (AGN) selected in the Swift-BAT 100-month survey.
The average redshift of these objects is ~0.03 and they
all lie within 500 Mpc. We used the MyTorus (Murphy et al. 2009) model to
perform X-ray spectral fitting both without and with the contribution of the
NuSTAR data in the 3-50 keV energy range. When the NuSTAR data are added to the
fit, 14 out of 30 of these objects (47% of the whole sample) have intrinsic
absorption N_{H, z} < 10^24 cm^-2 at a 3σ confidence level,
i.e., they are re-classified from Compton thick to Compton thin. Consequently,
we infer an overall observed fraction of CT-AGN with respect to the whole AGN
population lower than the one reported in previous works, and as low as
4%. We find evidence that this over-estimation of N_{H, z} is likely
due to the low quality of a subsample of spectra, either in the 2-10 keV band
or in the Swift-BAT one.
