Cosmographic reconstruction of f(T) cosmology
The cosmographic reconstruction of f(T) models is here revised in a model-independent way by fixing observational bounds on the most relevant terms of the f(T) Taylor expansion. We relate the f(T) models and their derivatives to the cosmographic parameters and then adopt a Monte Carlo analysis. The experimental bounds are thus independent of the choice of a particular f(T) model. The advantage of such an analysis lies in constraining the dynamics of the universe by reconstructing the form of f(T), without any further assumptions apart from the validity of the cosmological principle and the analyticity of the f(T) function. The main result is to fix model-independent cosmographic constraints on the functional form of f(T) which are compatible with the theoretical predictions. Furthermore, we infer a phenomenological expression for f(T), compatible with the current cosmographic bounds, and show that small deviations are expected from a constant f(T) term, indicating that the equation of state of dark energy could slightly evolve from the one of the ΛCDM model.
Comment: Accepted in Phys. Rev.
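The cosmographic approach above can be made concrete: at low redshift the luminosity distance is Taylor-expanded around z = 0 in terms of the present-day Hubble rate H0, deceleration q0 and jerk j0. This is the standard kinematic expansion, quoted here for illustration rather than taken verbatim from the paper:

```latex
d_L(z) = \frac{c\,z}{H_0}\left[ 1 + \frac{1-q_0}{2}\,z
       - \frac{1 - q_0 - 3q_0^2 + j_0}{6}\,z^2 + \mathcal{O}(z^3) \right]
```

Fitting distance data with this series constrains (H0, q0, j0, ...) directly, without committing to a dynamical model.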
q-means: A quantum algorithm for unsupervised machine learning
Quantum machine learning is one of the most promising applications of a
full-scale quantum computer. Over the past few years, many quantum machine
learning algorithms have been proposed that can potentially offer considerable
speedups over the corresponding classical algorithms. In this paper, we
introduce q-means, a new quantum algorithm for clustering, which is a canonical problem in unsupervised machine learning. The q-means algorithm has convergence and precision guarantees similar to k-means, and it outputs with high probability a good approximation of the cluster centroids, like the classical algorithm. Given a dataset of N d-dimensional vectors (seen as an N × d matrix V) stored in QRAM, the running time of q-means is Õ(kd(η/δ²)κ(V)(μ(V) + k(η/δ)) + k²(η^1.5/δ²)κ(V)μ(V)) per iteration, where κ(V) is the condition number, μ(V) is a parameter that appears in quantum linear algebra procedures and η is the maximum square norm of the data points. For a natural notion of well-clusterable datasets, the running time becomes Õ(k²d(η^2.5/δ³) + k^2.5(η²/δ³)) per iteration, which is linear in the number of features d, and polynomial in the rank k, the maximum square norm η and the error parameter δ. Both running times are only polylogarithmic in the number of datapoints N. Our algorithm provides substantial savings compared to the classical k-means algorithm that runs in Θ(Nkd) time per iteration, particularly for the case of large datasets.
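For reference, a minimal pure-Python sketch of the classical Lloyd iteration that q-means accelerates; the Θ(Nkd) per-iteration cost mentioned above is visible in the nested loops over points, centroids and coordinates (the deterministic initialization is an illustrative choice, not part of the paper):

```python
def kmeans(points, k, iters=20):
    """Classical Lloyd's k-means. Each iteration scans all N points against
    all k centroids in d dimensions, i.e. Theta(N*k*d) work per iteration."""
    centroids = list(points[:k])  # deterministic init, for illustration only
    for _ in range(iters):
        # Assignment step: label each point with its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centroids

# Two well-separated 2D blobs: k-means should recover the two blob means.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
        (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
cents = sorted(kmeans(data, 2))
```

q-means replaces both steps with quantum subroutines whose cost is polylogarithmic in N, which is where the claimed savings for large datasets come from.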
Updated constraints on f(R) gravity from cosmography
We address the issue of constraining the class of f(R) models able to reproduce the observed cosmological acceleration, by using the so-called cosmography of the universe. We consider a model-independent procedure to build up an f(z) series in terms of the measurable cosmographic coefficients; we therefore derive cosmological late-time bounds on f(z) and its derivatives up to the fourth order, by fitting the luminosity distance directly in terms of such coefficients. We perform a Monte Carlo analysis, by using three different statistical sets of cosmographic coefficients, in which the only assumptions are the validity of the cosmological principle and that the class of f(z) reduces to ΛCDM when f(R) → R − 2Λ. We use the updated Union 2.1 compilation for supernovae Ia, the constraint on the H0 value imposed by the measurements of the Hubble Space Telescope, and the Hubble dataset, with measures of H(z) at different redshifts. We find a good statistical agreement of the f(R) class under exam with the cosmological data; we thus propose a candidate for f(R), which is able to pass our cosmological test, reproducing the late-time acceleration in agreement with observations.
Comment: 10 pages, 9 figures, accepted for publication in Phys. Rev.
Precision cosmology with Pad\'e rational approximations: theoretical predictions versus observational limits
We propose a novel approach for parameterizing the luminosity distance, based
on the use of rational "Pad\'e" approximations. This new technique extends
standard Taylor treatments, overcoming possible convergence issues at high
redshifts plaguing standard cosmography. Indeed, we show that Pad\'e expansions
enable us to confidently use data over a larger interval with respect to the
usual Taylor series. To show this property in detail, we propose several Pad\'e
expansions and we compare these approximations with cosmic data, thus obtaining
cosmographic bounds from the observable universe for all cases. In particular,
we fit Pad\'e luminosity distances with observational data from different
uncorrelated surveys. We employ Union 2.1 supernova data, baryon acoustic oscillation data, Hubble Space Telescope measurements and differential age data. In
so doing, we also demonstrate that the use of Pad\'e approximants can improve analyses based on cosmographic auxiliary variables, a standard technique usually employed in cosmography to overcome the divergence problem. Moreover, for each drawback of standard cosmography we emphasize a possible resolution in the framework of Pad\'e approximants. In
particular, we investigate how to reduce systematics, how to overcome the
degeneracy between cosmological coefficients, how to treat divergences and so
forth. As a result, we show that cosmic bounds are actually refined through the
use of Pad\'e treatments and the thus derived best values of the cosmographic
parameters show slight departures from the standard cosmological paradigm.
Although all our results are perfectly consistent with the ΛCDM model,
evolving dark energy components different from a pure cosmological constant are
not definitively ruled out.
Comment: 24 pages, 9 figures
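The rational approximations used above take the standard Padé form: an order-(n, m) approximant matches the first n + m + 1 Taylor coefficients of the luminosity distance but, being a ratio of polynomials, stays bounded at redshifts where the Taylor polynomial diverges (generic definition, quoted for illustration):

```latex
P_{n,m}(z) = \frac{\displaystyle\sum_{i=0}^{n} a_i z^i}
                  {1 + \displaystyle\sum_{j=1}^{m} b_j z^j}.
```

This is why Padé fits can safely include high-redshift data that lie outside the convergence radius of the plain Taylor series.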
Quantum algorithms for spectral sums
We propose new quantum algorithms for estimating spectral sums of positive
semi-definite (PSD) matrices. The spectral sum of an n × n PSD matrix A, for a function f, is defined as Tr[f(A)] = Σ_j f(λ_j), where λ_j are the eigenvalues of A. Typical examples of spectral sums are the von Neumann entropy, the trace of the inverse, the log-determinant, and the Schatten p-norm, where the latter does not require the matrix to be PSD. The
current best classical randomized algorithms estimating these quantities have a
runtime that is at least linear in the number of nonzero entries of the
matrix and quadratic in the estimation error. Assuming access to a
block-encoding of a matrix, our algorithms are sub-linear in the matrix size,
and depend at most quadratically on other parameters, like the condition number
and the approximation error, and thus can compete with most of the randomized
and distributed classical algorithms proposed in the literature, and
polynomially improve the runtime of other quantum algorithms proposed for the
same problems. We show how the algorithms and techniques used in this work can
be applied to three problems in spectral graph theory: approximating the number
of triangles, the effective resistance, and the number of spanning trees within
a graph.
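As a purely classical illustration of the quantities the abstract targets, the spectral sums of a small symmetric matrix can be computed directly from its eigenvalues (a sketch with a hand-picked 2 × 2 example, not the quantum algorithm):

```python
import math

def eigvals_sym2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]],
    via the quadratic formula for its characteristic polynomial."""
    mean = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)
    return (mean - r, mean + r)

def spectral_sum(eigs, f):
    """Tr[f(A)] = sum_j f(lambda_j): the spectral sum of A for f."""
    return sum(f(lam) for lam in eigs)

eigs = eigvals_sym2(2.0, 1.0, 2.0)  # A = [[2, 1], [1, 2]], eigenvalues 1 and 3
log_det = spectral_sum(eigs, math.log)                       # log-determinant
trace_inv = spectral_sum(eigs, lambda x: 1.0 / x)            # trace of the inverse
schatten3 = spectral_sum(eigs, lambda x: x ** 3) ** (1 / 3)  # Schatten 3-norm
tr = spectral_sum(eigs, lambda x: x)                         # plain trace
# Von Neumann entropy of the normalized matrix A / Tr[A]:
entropy = -spectral_sum(eigs, lambda x: (x / tr) * math.log(x / tr))
```

The classical cost here is dominated by the eigendecomposition; the quantum algorithms in the paper estimate such sums from a block-encoding of A without diagonalizing it.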
SUPPORT FOR THE DESIGN OF ORC PLANTS: MULTI-OBJECTIVE OPTIMIZATION
The challenge of climate change and of reducing greenhouse gas emissions aims at containing the rise of the mean global temperature within 2 °C, a target that will have to be met alongside a growing world population.
One of the key areas of intervention for reaching this target is energy efficiency.
The cost of each individual energy-efficiency measure is clearly crucial, especially in a scenario of low energy prices such as the one the world is currently experiencing.
It is therefore very important to evaluate each specific technological solution in light of the two variables of performance and cost (both capital and operating), in order to select the technology, or package of technologies, that offers the greatest energy savings at the lowest possible cost.
For this reason, this thesis presents a numerical model that, by means of multi-objective optimization, can become a useful tool for the design of ORC plants and for choosing the best plant solution in each specific case.
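A minimal sketch of the multi-objective selection idea described above: among candidate plant configurations, keep the Pareto-optimal ones, i.e. those for which no alternative is simultaneously cheaper and more energy-saving. The candidate names and numbers below are hypothetical, for illustration only:

```python
def pareto_front(designs):
    """Keep the non-dominated designs: a design is dominated if some other
    design has cost <= its cost AND savings >= its savings (and differs in
    at least one objective). We minimize cost and maximize savings."""
    front = []
    for name, cost, savings in designs:
        dominated = any(c <= cost and s >= savings and (c, s) != (cost, savings)
                        for _, c, s in designs)
        if not dominated:
            front.append(name)
    return front

# Hypothetical ORC configurations: (name, capital cost [kEUR], savings [MWh/yr]).
candidates = [
    ("A", 100, 400),  # cheap, modest savings
    ("B", 150, 700),  # balanced
    ("C", 150, 650),  # dominated by B (same cost, lower savings)
    ("D", 300, 900),  # expensive, highest savings
    ("E", 320, 850),  # dominated by D
]
best = pareto_front(candidates)
```

The full thesis model would evaluate cost and performance from a thermodynamic ORC simulation and search the design space with a multi-objective optimizer; the Pareto filter above is only the final selection step of that workflow.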
