Nonparametric IV estimation of shape-invariant Engel curves
This paper concerns the identification and estimation of a shape-invariant Engel
curve system with endogenous total expenditure. The shape-invariant specification
involves a common shift parameter for each demographic group in a pooled
system of Engel curves. Our focus is on the identification and estimation of both
the nonparametric shape of the Engel curve and the parametric specification of the
demographic scaling parameters. We present a new identification condition, closely
related to the concept of bounded completeness in statistics. The estimation procedure
applies the sieve minimum distance estimation of conditional moment restrictions
allowing for endogeneity. We establish a new root mean squared convergence
rate for the nonparametric IV regression when the endogenous regressor has unbounded
support. Root-n asymptotic normality and semiparametric efficiency of
the parametric components are also given under a set of ‘low-level’ sufficient conditions.
Monte Carlo simulations shed light on the choice of smoothing parameters
and demonstrate that the sieve IV estimator performs well. An application is made
to the estimation of Engel curves using the UK Family Expenditure Survey and
shows the importance of adjusting for endogeneity in terms of both the curvature
and demographic parameters of systems of Engel curves.
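As a rough illustration of the sieve IV idea (not the paper's estimator), the following Python sketch fits a series approximation to y = h(x) + u with E[u | w] = 0 by projecting the regressor basis onto a richer instrument basis. The polynomial bases, degrees, and simulated design are illustrative assumptions.

```python
# A minimal sketch of sieve (series) IV estimation with an endogenous
# regressor, in the spirit of sieve minimum distance estimation of the
# conditional moment restriction E[y - h(x) | w] = 0. Basis choices and
# dimensions are illustrative assumptions, not the paper's specification.
import numpy as np

def poly_basis(v, degree):
    """Polynomial sieve basis [1, v, v^2, ..., v^degree]."""
    return np.vander(v, degree + 1, increasing=True)

def sieve_iv(y, x, w, deg_x=3, deg_w=5):
    """Estimate h in y = h(x) + u with E[u | w] = 0 via sieve 2SLS."""
    psi = poly_basis(x, deg_x)   # sieve basis for h
    phi = poly_basis(w, deg_w)   # instrument basis, deg_w >= deg_x
    # Project each column of psi onto the column space of phi.
    proj = phi @ np.linalg.lstsq(phi, psi, rcond=None)[0]
    beta = np.linalg.lstsq(proj, y, rcond=None)[0]
    return lambda x_new: poly_basis(x_new, deg_x) @ beta

# Simulated data with endogeneity: x and the error share a common shock e.
rng = np.random.default_rng(0)
n = 2000
w = rng.normal(size=n)                       # instrument
e = rng.normal(size=n)                       # common shock
x = 0.8 * w + 0.6 * e + 0.2 * rng.normal(size=n)
y = np.sin(x) + 0.5 * e                      # true h(x) = sin(x)
h_hat = sieve_iv(y, x, w)
print(h_hat(np.array([-1.0, 0.0, 1.0])))     # compare with sin(-1), 0, sin(1)
```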
Probing spin entanglement by gate-voltage-controlled interference of current correlation in quantum spin Hall insulators
We propose an entanglement detector composed of two quantum spin Hall
insulators and a side gate deposited on one of the edge channels. For an ac
gate voltage, the differential noise contributed by the entangled electron
pairs exhibits nontrivial step structures, from which the spin-entanglement
concurrence can be easily obtained. The possible spin-dephasing effects in the
quantum spin Hall insulators are also included. Comment: Physics Letters A, in press
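For context on the quantity the detector is designed to read out, here is a minimal Python sketch of Wootters' concurrence for a two-qubit spin state; this is the standard formula, not the paper's extraction of concurrence from differential-noise step structures.

```python
# Wootters' concurrence of a two-qubit density matrix: the entanglement
# measure the proposed detector would report. Standard textbook formula.
import numpy as np

def concurrence(rho):
    """Concurrence C = max(0, l1 - l2 - l3 - l4) for a 4x4 density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy                 # spin-flipped state
    eigs = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(eigs)))[::-1]       # descending sqrt-eigenvalues
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Example: a spin singlet is maximally entangled, so C = 1.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
print(concurrence(np.outer(psi, psi.conj())))        # -> 1.0
```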
Addressing business agility challenges with enterprise systems
It is clear that systems agility (i.e., having a responsive IT infrastructure that can be changed quickly to meet changing business needs) has become a critical component of organizational agility. However, skeptics continue to suggest that, despite the benefits enterprise system packages provide, they constrain the choices of firms faced with agility challenges. The reason for this skepticism is that the tight integration between different parts of the business that enables many enterprise systems' benefits also increases the systems' complexity, and this increased complexity, say the skeptics, increases the difficulty of changing systems when business needs change. These persistent concerns motivated us to conduct a series of interviews with business and IT managers in 15 firms to identify how they addressed, in total, 57 different business agility challenges. Our analysis suggests that when the challenges involved an enterprise system, firms were able to address a high percentage of their challenges with four options that avoid the difficulties associated with changing the complex core system: capabilities already built into the package but not previously used, leveraging globally consistent integrated data already available, using add-on systems available on the market that easily interfaced with the existing enterprise system, and vendor-provided patches that automatically updated the code. These findings have important implications for organizations with and without enterprise system architectures.
Why are some BL Lacs detected by Fermi, but others not?
By cross-correlating an archival sample of 170 BL Lacs with the 2-year Fermi-LAT
AGN sample, we have compiled a sample of 100 BL Lacs with Fermi detections
(FBLs) and a sample of 70 non-Fermi BL Lacs (NFBLs). We compared various
parameters of FBLs with those of NFBLs, including the redshift, the
low-frequency radio luminosity at 408 MHz, the absolute magnitude of the host
galaxies, the polarization fraction from the NVSS survey, the observed
arcsecond-scale radio core flux at 5 GHz, and the jet Doppler factor; all the
parameters are directly measured or derived from data available in the
literature. We found that the Doppler factor is on average larger in FBLs than
in NFBLs, and that the gamma-ray detection rate is higher in sources with a
higher Doppler factor. In contrast, there are no significant differences in the
intrinsic parameters (redshift, 408 MHz radio luminosity, host-galaxy
magnitude, and polarization fraction). FBLs also seem to have a higher
probability of exhibiting measurable proper motion. These results strongly
indicate stronger beaming in FBLs compared to NFBLs. The radio core flux is
found to be strongly correlated with the gamma-ray flux, and the correlation
remains after excluding the common dependence on the Doppler factor. At a fixed
Doppler factor, FBLs have systematically larger radio core flux than NFBLs,
implying lower gamma-ray emission in NFBLs, since the radio and gamma-ray
fluxes are significantly correlated. Our results indicate that the Doppler
factor is an important parameter for gamma-ray detection; that the
non-detection of gamma-ray emission in NFBLs is likely due to weaker beaming
and/or lower intrinsic gamma-ray flux; and that the gamma-rays are likely
produced co-spatially with the arcsecond-scale radio core radiation, mainly
through the SSC process. Comment: 6 pages, 6 figures, accepted by A&A
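To make the beaming argument concrete, here is a small Python sketch of the standard Doppler-boosting formulas: identical intrinsic jets can look very different depending on the Doppler factor. The Lorentz factor, viewing angles, spectral index, and the blob exponent p = 3 are illustrative assumptions, not values from the paper.

```python
# Standard relativistic Doppler boosting for a jet component.
import numpy as np

def doppler_factor(gamma, theta_deg):
    """delta = 1 / (Gamma * (1 - beta * cos(theta)))."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(np.radians(theta_deg))))

def boosted_flux(s_intrinsic, delta, alpha=0.0, p=3):
    """Observed flux of a discrete blob: S_obs = delta**(p + alpha) * S_int."""
    return delta**(p + alpha) * s_intrinsic

# Same intrinsic flux, different viewing angles -> wildly different S_obs.
for theta in (2, 10, 30):
    d = doppler_factor(gamma=10, theta_deg=theta)
    print(theta, round(d, 2), round(boosted_flux(1.0, d), 2))
```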
Variable-Length Coding with Feedback: Finite-Length Codewords and Periodic Decoding
Theoretical analysis has long indicated that feedback improves the error
exponent but not the capacity of single-user memoryless channels. Recently
Polyanskiy et al. studied the benefit of variable-length feedback with
termination (VLFT) codes in the non-asymptotic regime. In that work,
achievability is based on an infinite-length random code, and decoding is
attempted at every symbol. The coding-rate backoff from capacity due to channel
dispersion is greatly reduced with feedback, allowing capacity to be approached
with surprisingly small expected latency. This paper is mainly concerned with
VLFT codes based on finite-length codes, with decoding attempted only at
certain specified decoding times. The penalties of using a finite block length
N and a sequence of specified decoding times are studied. This paper shows that
properly scaling N with the expected latency can achieve the same performance,
up to constant terms, as with N = infinity. The penalty introduced by periodic
decoding times is linear in the interval between decoding times; hence the
performance approaches capacity as the expected latency grows, provided the
interval between decoding times grows sub-linearly with the expected latency.
Comment: 8 pages. A shortened version was submitted to ISIT 201
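The linear penalty of periodic decoding can be illustrated with a short Monte Carlo in Python: if decoding is attempted only every I symbols, the realized latency rounds up to the next attempt time, adding roughly (I - 1)/2 symbols on average. The geometric model for the first decodable symbol is an illustrative assumption, not the paper's channel model.

```python
# Monte Carlo sketch of the periodic-decoding penalty. T is the earliest
# symbol at which decoding would succeed; with decoding attempts only every
# `interval` symbols, latency rounds up to the next attempt time.
import numpy as np

rng = np.random.default_rng(1)
t = rng.geometric(p=0.01, size=200_000)          # earliest decodable symbol
for interval in (1, 8, 32, 128):
    latency = np.ceil(t / interval) * interval   # wait for next decoding time
    # Average extra latency grows roughly linearly in the interval.
    print(interval, round(latency.mean() - t.mean(), 2))
```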
A Lattice Boltzmann method for simulations of liquid-vapor thermal flows
We present a novel lattice Boltzmann method capable of simulating
thermodynamic multiphase flows. The approach is fully thermodynamically
consistent at the macroscopic level. Using this new method, a liquid-vapor
boiling process, including liquid-vapor formation and coalescence together
with full temperature coupling, is simulated for the first time. Comment: one gzipped tar file, 19 pages, 4 figures
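For readers new to the method's building blocks, here is a minimal single-phase D2Q9 BGK lattice Boltzmann skeleton in Python, shown only to fix notation for the collide-and-stream cycle; the paper's thermodynamically consistent multiphase-thermal model adds forcing and energy coupling well beyond this sketch.

```python
# Minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann skeleton
# with periodic boundaries. Grid size and relaxation time are arbitrary.
import numpy as np

c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
              [1,1],[-1,1],[-1,-1],[1,-1]])       # D2Q9 lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)          # D2Q9 weights
nx, ny, tau = 64, 64, 0.8

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution on the D2Q9 lattice."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
for _ in range(100):
    rho = f.sum(axis=0)                            # macroscopic density
    ux = (c[:, 0, None, None] * f).sum(axis=0) / rho
    uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau      # BGK collision
    for i in range(9):                             # periodic streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
print(rho.mean())                                  # mass conserved: 1.0
```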
A Rate-Compatible Sphere-Packing Analysis of Feedback Coding with Limited Retransmissions
Recent work by Polyanskiy et al. and Chen et al. has excited new interest in
using feedback to approach capacity with low latency. Polyanskiy showed that
feedback identifying the first symbol at which decoding is successful allows
capacity to be approached with surprisingly low latency. This paper uses Chen's
rate-compatible sphere-packing (RCSP) analysis to study what happens when
symbols must be transmitted in packets, as with a traditional hybrid ARQ
system, and limited to relatively few (six or fewer) incremental transmissions.
Numerical optimizations find the series of progressively growing cumulative
block lengths that enable RCSP to approach capacity with the minimum possible
latency. RCSP analysis shows that five incremental transmissions are sufficient
to achieve 92% of capacity with an average block length of fewer than 101
symbols on the AWGN channel with SNR of 2.0 dB.
The RCSP analysis provides a decoding error trajectory that specifies the
decoding error rate for each cumulative block length. Though RCSP is an
idealization, an example tail-biting convolutional code matches the RCSP
decoding error trajectory and achieves 91% of capacity with an average block
length of 102 symbols on the AWGN channel with SNR of 2.0 dB. We also show how
RCSP analysis can be used in cases where packets have deadlines associated with
them (leading to an outage probability). Comment: To be published at the 2012
IEEE International Symposium on Information Theory, Cambridge, MA, USA. Updated
to incorporate reviewers' comments and add new figures
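The bookkeeping behind such optimizations can be sketched in a few lines of Python: given cumulative block lengths and the probability that decoding first succeeds at each attempt, the expected latency and throughput follow directly. All numbers below are hypothetical, not the paper's optimized values.

```python
# Expected-latency bookkeeping for incremental (hybrid-ARQ-style)
# transmissions: E[N] = sum_j P_j * N_j, with any residual failure mass
# charged the full final length; throughput is k / E[N].
def expected_latency(cum_lengths, success_probs):
    """E[N] given cumulative lengths N_j and first-success probabilities P_j."""
    residual = 1.0 - sum(success_probs)        # decoding never succeeds
    return (sum(p * n for p, n in zip(success_probs, cum_lengths))
            + residual * cum_lengths[-1])

k = 64                                          # information bits (hypothetical)
cum_lengths = [70, 85, 95, 105, 120]            # cumulative symbols (hypothetical)
success_probs = [0.80, 0.12, 0.05, 0.02, 0.009] # first success at attempt j
en = expected_latency(cum_lengths, success_probs)
print(round(en, 2), round(k / en, 3))           # expected latency, throughput
```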
