Massive Higher Spin Fields Coupled to a Scalar: Aspects of Interaction and Causality
We consider in detail the most general cubic Lagrangian which describes an
interaction between two identical higher spin fields in a triplet formulation
with a scalar field, all fields having the same mass. After
performing the gauge fixing procedure we find that for the case of massive
fields the gauge invariance does not guarantee the preservation of the correct
number of propagating physical degrees of freedom. In order to get the correct
number of degrees of freedom for the massive higher spin field one should
impose some additional conditions on parameters of the vertex. Further
independent constraints are provided by the causality analysis, indicating that
the requirement of causality should be imposed in addition to the requirement
of gauge invariance in order to have a consistent propagation of massive higher
spin fields.
A generalized Fellner-Schall method for smoothing parameter estimation with application to Tweedie location, scale and shape models
We consider the estimation of smoothing parameters and variance components in
models with a regular log likelihood subject to quadratic penalization of the
model coefficients, via a generalization of the method of Fellner (1986) and
Schall (1991). In particular: (i) we generalize the original method to the case
of penalties that are linear in several smoothing parameters, thereby covering
the important cases of tensor product and adaptive smoothers; (ii) we show why
the method's steps increase the restricted marginal likelihood of the model,
show that it tends to converge faster than the EM algorithm or obvious
accelerations of it, and investigate its relation to Newton optimization;
(iii) we generalize the method to any Fisher regular likelihood. The method
represents a considerable simplification over existing methods of estimating
smoothing parameters in the context of regular likelihoods, without sacrificing
generality: for example, it is only necessary to compute with the same first
and second derivatives of the log-likelihood required for coefficient
estimation, and not with the third or fourth order derivatives required by
alternative approaches. Examples are provided which would have been impossible
or impractical with pre-existing Fellner-Schall methods, along with an example
of a Tweedie location, scale and shape model which would be a challenge for
alternative methods.
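To make the flavour of such an update concrete, here is a minimal sketch of a Fellner-Schall-type multiplicative smoothing-parameter update for the simplest case: a single ridge penalty in a Gaussian model with known error variance. The ratio-of-traces form below is one common way such updates are written and uses only the first- and second-derivative quantities the abstract mentions; the paper's generalized update may differ in detail.

```python
import numpy as np

def fellner_schall_ridge(X, y, sigma2=1.0, lam=1.0, n_iter=50, tol=1e-8):
    """Illustrative sketch: Fellner-Schall-type multiplicative update for a
    single ridge penalty S = I in the Gaussian model y ~ N(X beta, sigma2 I).
    Only the log-likelihood's first and second derivative quantities (X'X and
    the penalized Hessian) are needed."""
    n, p = X.shape
    XtX = X.T @ X / sigma2
    Xty = X.T @ y / sigma2
    beta = np.zeros(p)
    for _ in range(n_iter):
        H = XtX + lam * np.eye(p)        # penalized Hessian
        beta = np.linalg.solve(H, Xty)   # penalized ML coefficients at current lam
        # Multiplicative update: numerator and denominator are both non-negative,
        # so the smoothing parameter stays non-negative.
        num = p / lam - np.trace(np.linalg.inv(H))
        lam_new = lam * num / (beta @ beta)
        if abs(lam_new - lam) < tol * max(lam, 1.0):
            lam = lam_new
            break
        lam = lam_new
    return lam, beta

# Toy usage on simulated data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = X @ (0.3 * rng.standard_normal(10)) + rng.standard_normal(200)
lam_hat, beta_hat = fellner_schall_ridge(X, y)
```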
Designing a Belief Function-Based Accessibility Indicator to Improve Web Browsing for Disabled People
The purpose of this study is to provide an accessibility measure of
web pages, in order to draw disabled users to the pages that have been designed
to be accessible to them. Our approach is based on the theory of belief
functions, using data which are supplied by reports produced by automatic web
content assessors that test the validity of criteria defined by the WCAG 2.0
guidelines proposed by the World Wide Web Consortium (W3C) organization. These
tools detect errors with gradual degrees of certainty and their results do not
always converge. For these reasons, to fuse information coming from the
reports, we choose to use an information fusion framework which can take into
account the uncertainty and imprecision of information as well as divergences
between sources. Our accessibility indicator covers four categories of
deficiencies. To validate the theoretical approach in this context, we propose
an evaluation conducted on a corpus of the 100 most visited French news websites,
using two evaluation tools. The results obtained demonstrate the value of our
accessibility indicator.
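As a rough illustration of the kind of fusion involved, the sketch below combines two hypothetical assessor reports about a single criterion using Dempster's rule of combination on a two-element frame {accessible, not accessible}, with explicit mass assigned to ignorance. The paper's actual indicator uses a richer model (four deficiency categories and possibly a different combination rule), so this is only schematic.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) over the same
    frame of discernment using Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for (b1, v1), (b2, v2) in product(m1.items(), m2.items()):
        inter = b1 & b2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

ACC = frozenset({"accessible"})
NOT = frozenset({"not_accessible"})
IGN = ACC | NOT                          # ignorance: the assessor cannot decide

# Hypothetical reports from two automatic assessors about one criterion.
assessor1 = {ACC: 0.6, NOT: 0.1, IGN: 0.3}
assessor2 = {ACC: 0.4, NOT: 0.3, IGN: 0.3}
fused = dempster_combine(assessor1, assessor2)   # fused masses on ACC, NOT, IGN
```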
Evidential-EM Algorithm Applied to Progressively Censored Observations
The Evidential-EM (E2M) algorithm is an effective approach for computing maximum
likelihood estimates under finite mixture models, especially when there is
uncertain information about the data. In this paper we present an extension of the
E2M method to a particular case of incomplete data, where the loss of
information is due both to the mixture model and to censored observations. The
prior uncertain information is expressed by belief functions, while the
pseudo-likelihood function is derived from the imprecise observations and the prior
knowledge. The E2M method is then invoked to maximize the generalized likelihood
function and obtain the optimal parameter estimates. Numerical examples
show that the proposed method effectively integrates the uncertain prior
information with the current imprecise knowledge conveyed by the observed
data.
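For context, the sketch below shows the classical special case that E2M generalizes: EM for a two-component Gaussian mixture (with a known common standard deviation, assumed here for brevity) when some observations are right-censored, so that only a lower bound on their value is known. The belief-function weighting of prior uncertain information that distinguishes E2M is deliberately omitted.

```python
import numpy as np
from scipy.stats import norm

def em_mixture_right_censored(x, censored, sigma=1.0, n_iter=100):
    """EM for a two-component Gaussian mixture with known common sigma when
    some observations are right-censored: for those, x[i] is only a lower
    bound on the true value. (The paper's E2M additionally handles
    belief-function priors, which this classical sketch omits.)"""
    mu = np.array([np.min(x), np.max(x)], dtype=float)   # crude initialization
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities use the density for exact points and the
        # survival function for censored points.
        like = np.where(censored[:, None],
                        norm.sf(x[:, None], loc=mu, scale=sigma),
                        norm.pdf(x[:, None], loc=mu, scale=sigma))
        r = pi * like
        r /= r.sum(axis=1, keepdims=True)
        # Expected value of a censored observation under each component
        # (mean plus sigma times the inverse Mills ratio).
        a = (x[:, None] - mu) / sigma
        x_fill = np.where(censored[:, None],
                          mu + sigma * norm.pdf(a) / norm.sf(a),
                          x[:, None])
        # M-step: update mixing proportions and means.
        pi = r.mean(axis=0)
        mu = (r * x_fill).sum(axis=0) / r.sum(axis=0)
    return pi, mu

# Toy usage: draw from two components, right-censor everything above 2.5.
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0, 1, 150), rng.normal(3, 1, 150)])
censored = z > 2.5
x = np.where(censored, 2.5, z)
pi_hat, mu_hat = em_mixture_right_censored(x, censored)
```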
X-ray Lighthouses of the High-Redshift Universe. II. Further Snapshot Observations of the Most Luminous z>4 Quasars with Chandra
We report on Chandra observations of a sample of 11 optically luminous
(M_B < -28.5) quasars at z=3.96-4.55 selected from the Palomar Digital Sky Survey
and the Automatic Plate Measuring Facility Survey. These are among the most
luminous z>4 quasars known and hence represent ideal witnesses of the end of
the "dark age ''. Nine quasars are detected by Chandra, with ~2-57 counts in
the observed 0.5-8 keV band. These detections increase the number of X-ray
detected AGN at z>4 to ~90; overall, Chandra has detected ~85% of the
high-redshift quasars observed with snapshot (few kilosecond) observations. PSS
1506+5220, one of the two X-ray undetected quasars, displays a number of
notable features in its rest-frame ultraviolet spectrum, the most prominent
being broad, deep SiIV and CIV absorption lines. The average optical-to-X-ray
spectral index for the present sample (alpha_ox = -1.88 +/- 0.05) is steeper than
that typically found for z>4 quasars but consistent with the expected value
from the known dependence of this spectral index on quasar luminosity.
We present joint X-ray spectral fitting for a sample of 48 radio-quiet
quasars in the redshift range 3.99-6.28 for which Chandra observations are
available. The X-ray spectrum (~870 counts) is well parameterized by a power
law with Gamma=1.93+0.10/-0.09 in the rest-frame ~2-40 keV band, and a tight
upper limit of N_H~5x10^21 cm^-2 is obtained on any average intrinsic X-ray
absorption. There is no indication of any significant evolution in the X-ray
properties of quasars between redshifts zero and six, suggesting that the
physical processes of accretion onto massive black holes have not changed over
the bulk of cosmic time.
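The optical-to-X-ray spectral index quoted above, alpha_ox, is the slope of a nominal power law connecting the rest-frame monochromatic fluxes at 2500 Angstrom and 2 keV. A small sketch of the standard computation, with made-up flux values chosen only so the result lands near the sample average:

```python
import math

# Hypothetical rest-frame monochromatic flux densities (erg s^-1 cm^-2 Hz^-1).
f_2500 = 1.0e-27    # at 2500 Angstrom
f_2kev = 1.3e-32    # at 2 keV

nu_2kev = 2.0e3 / 4.135667696e-15   # 2 keV converted to Hz via E/h (h in eV s)
nu_2500 = 2.998e18 / 2500.0         # c (in Angstrom/s) divided by 2500 Angstrom

alpha_ox = math.log10(f_2kev / f_2500) / math.log10(nu_2kev / nu_2500)
# Frequently quoted shorthand: alpha_ox ~= 0.3838 * log10(f_2kev / f_2500).
print(round(alpha_ox, 2))   # about -1.88
```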
Diagonal and Low-Rank Matrix Decompositions, Correlation Matrices, and Ellipsoid Fitting
In this paper we establish links between, and new results for, three problems
that are not usually considered together. The first is a matrix decomposition
problem that arises in areas such as statistical modeling and signal
processing: given a matrix formed as the sum of an unknown diagonal matrix
and an unknown low-rank positive semidefinite matrix, decompose it into these
constituents. The second problem we consider is to determine the facial
structure of the set of correlation matrices, a convex set also known as the
elliptope. This convex body, and particularly its facial structure, plays a
role in applications from combinatorial optimization to mathematical finance.
The third problem is a basic geometric question: given points v_1, ..., v_n in
R^k (where n > k), determine whether there is a centered ellipsoid passing
exactly through all of the points.
We show that in a precise sense these three problems are equivalent.
Furthermore we establish a simple sufficient condition on a subspace U that
ensures any positive semidefinite matrix L with column space U can be
recovered from D + L for any diagonal matrix D using a convex
optimization-based heuristic known as minimum trace factor analysis. This
result leads to a new understanding of the structure of rank-deficient
correlation matrices and a simple condition on a set of points that ensures
there is a centered ellipsoid passing through them.
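A compact sketch of the minimum trace factor analysis heuristic mentioned above, written as a semidefinite program in cvxpy: minimize the trace of the low-rank part L subject to L being positive semidefinite and Sigma - L being diagonal. Some formulations additionally require the diagonal part to be nonnegative; that constraint is omitted here.

```python
import cvxpy as cp
import numpy as np

def minimum_trace_factor_analysis(Sigma):
    """Heuristic sketch: given Sigma assumed to be (approximately) the sum of a
    diagonal matrix and a low-rank PSD matrix, recover the low-rank part L by
    minimizing its trace subject to L being PSD and Sigma - L diagonal."""
    n = Sigma.shape[0]
    L = cp.Variable((n, n), PSD=True)
    # Force the off-diagonal entries of Sigma - L to vanish, i.e. the
    # off-diagonal entries of L must match those of Sigma.
    constraints = [Sigma - L == cp.diag(cp.diag(Sigma - L))]
    cp.Problem(cp.Minimize(cp.trace(L)), constraints).solve()
    L_hat = L.value
    D_hat = np.diag(np.diag(Sigma - L_hat))
    return D_hat, L_hat

# Toy usage: build Sigma = diagonal + rank-2 PSD and try to recover the pieces.
rng = np.random.default_rng(2)
F = rng.standard_normal((6, 2))
Sigma = np.diag(rng.uniform(0.5, 1.5, size=6)) + F @ F.T
D_hat, L_hat = minimum_trace_factor_analysis(Sigma)
```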
Efficient Bayesian Inference for Learning in the Ising Linear Perceptron and Signal Detection in CDMA
An efficient new Bayesian inference technique is employed for studying critical
properties of the Ising linear perceptron and for signal detection in Code
Division Multiple Access (CDMA). The approach is based on a recently introduced
message passing technique for densely connected systems. Here we study both
critical and non-critical regimes. Results obtained in the non-critical regime
give rise to a highly efficient signal detection algorithm in the context of
CDMA; while in the critical regime one observes a first order transition line
that ends in a continuous phase transition point. Finite size effects are also
studied.
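The message-passing detector itself is not reproduced here, but the sketch below shows the simpler idea it improves upon: a soft parallel-interference-cancellation detector for a CDMA model y = S b + noise with binary symbols, which iteratively refines posterior-mean estimates of each user's bit. The spreading sequences, noise level, and tanh update are generic assumptions, not the paper's algorithm.

```python
import numpy as np

def soft_pic_detector(S, y, sigma2, n_iter=20):
    """Generic soft parallel-interference-cancellation detector for the CDMA
    model y = S b + noise, b in {-1,+1}^K, unit-norm spreading sequences.
    This is not the paper's message-passing algorithm, only the simpler idea
    of iteratively refined posterior-mean (soft bit) estimates."""
    K = S.shape[1]
    m = np.zeros(K)                                  # soft estimate of each bit
    for _ in range(n_iter):
        for k in range(K):
            residual = y - S @ m + S[:, k] * m[k]    # cancel all users except k
            z = S[:, k] @ residual                   # matched filter for user k
            m[k] = np.tanh(z / sigma2)               # posterior-mean-style update
    return np.sign(m), m

# Toy usage with random binary spreading sequences.
rng = np.random.default_rng(3)
N, K, sigma = 64, 16, 0.5
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
b = rng.choice([-1.0, 1.0], size=K)
y = S @ b + sigma * rng.standard_normal(N)
b_hat, m = soft_pic_detector(S, y, sigma**2)
```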
SACOC: A spectral-based ACO clustering algorithm
The application of ACO-based algorithms in data mining has grown over the last few years, and several supervised and unsupervised learning algorithms have been developed using this bio-inspired approach. Most recent work on unsupervised learning has focused on clustering, where ACO-based techniques have shown great potential. At the same time, new clustering techniques that seek the continuity of data, especially spectral-based approaches as opposed to classical centroid-based approaches, have attracted increasing research interest, an area still little explored by ACO clustering techniques. This work presents a hybrid spectral-based ACO clustering algorithm inspired by the ACO Clustering (ACOC) algorithm. The proposed approach combines ACOC with the spectral Laplacian to generate a new search space for the algorithm in order to obtain more promising solutions. The new algorithm, called SACOC, has been compared against well-known algorithms (K-means and Spectral Clustering) and against ACOC. The experiments measure the accuracy of the algorithm on both synthetic datasets and real-world datasets extracted from the UCI Machine Learning Repository.
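A rough sketch of the spectral pre-processing step that SACOC layers under the ant-based search, assuming a standard RBF affinity and symmetric normalized Laplacian (the paper's exact construction may differ). Plain k-means is used below only as a stand-in for the ACOC search in the embedded space, which is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

def spectral_embedding(X, n_clusters, gamma=1.0):
    """Spectral pre-processing in the spirit of SACOC: build an RBF affinity,
    form the symmetric normalized Laplacian, and keep its bottom eigenvectors
    as the new search space in which the clustering search then runs."""
    W = np.exp(-gamma * pairwise_distances(X, metric="sqeuclidean"))
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    _, vecs = eigh(L)                                    # ascending eigenvalues
    U = vecs[:, :n_clusters]                             # smallest eigenvectors
    return U / np.linalg.norm(U, axis=1, keepdims=True)  # row-normalized embedding

# Toy usage: embed two blobs, then cluster in the embedded space
# (k-means stands in for the ACOC search here).
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
U = spectral_embedding(X, n_clusters=2)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(U)
```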
