1,135 research outputs found

    Cosmographic reconstruction of f(\mathcal{T}) cosmology

    A cosmographic reconstruction of f(\mathcal{T}) models is here revised in a model-independent way by fixing observational bounds on the most relevant terms of the f(\mathcal{T}) Taylor expansion. We relate the f(\mathcal{T}) models and their derivatives to the cosmographic parameters and then adopt a Monte Carlo analysis. The experimental bounds are thus independent of the choice of a particular f(\mathcal{T}) model. The advantage of such an analysis lies in constraining the dynamics of the universe by reconstructing the form of f(\mathcal{T}), without any further assumptions apart from the validity of the cosmological principle and the analyticity of the f(\mathcal{T}) function. The main result is to fix model-independent cosmographic constraints on the functional form of f(\mathcal{T}) which are compatible with the theoretical predictions. Furthermore, we infer a phenomenological expression for f(\mathcal{T}), compatible with the current cosmographic bounds, and show that small deviations are expected from a constant f(\mathcal{T}) term, indicating that the equation of state of dark energy could evolve slightly away from that of the \LambdaCDM model. Comment: Accepted in Phys. Rev.
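
    The mapping sketched in this abstract can be made concrete with the standard cosmographic definitions and a low-order Taylor expansion. The display below is a generic sketch of the setup; the truncation order and the sign and normalization conventions are assumptions here, not values taken from this work:

        H = \dot{a}/a, \qquad q = -\frac{\ddot{a}}{a H^2}, \qquad j = \frac{\dddot{a}}{a H^3},

        f(\mathcal{T}) \simeq \sum_{n=0}^{3} \frac{f_n}{n!}\,(\mathcal{T} - \mathcal{T}_0)^n, \qquad f_n \equiv \frac{d^n f}{d\mathcal{T}^n}\Big|_{\mathcal{T}_0},

    where \mathcal{T} = -6H^2 for a spatially flat Friedmann-Robertson-Walker metric, so that observational bounds on (H_0, q_0, j_0) translate directly into bounds on the coefficients f_n.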

    q-means: A quantum algorithm for unsupervised machine learning

    Quantum machine learning is one of the most promising applications of a full-scale quantum computer. Over the past few years, many quantum machine learning algorithms have been proposed that can potentially offer considerable speedups over the corresponding classical algorithms. In this paper, we introduce q-means, a new quantum algorithm for clustering, a canonical problem in unsupervised machine learning. The q-means algorithm has convergence and precision guarantees similar to k-means, and with high probability it outputs a good approximation of the k cluster centroids, like the classical algorithm. Given a dataset of N d-dimensional vectors v_i (seen as a matrix V \in \mathbb{R}^{N \times d}) stored in QRAM, the running time of q-means is \widetilde{O}\left( k d \frac{\eta}{\delta^2}\kappa(V)\left(\mu(V) + k \frac{\eta}{\delta}\right) + k^2 \frac{\eta^{1.5}}{\delta^2} \kappa(V)\mu(V) \right) per iteration, where \kappa(V) is the condition number, \mu(V) is a parameter that appears in quantum linear algebra procedures, and \eta = \max_i \|v_i\|^2. For a natural notion of well-clusterable datasets, the running time becomes \widetilde{O}\left( k^2 d \frac{\eta^{2.5}}{\delta^3} + k^{2.5} \frac{\eta^2}{\delta^3} \right) per iteration, which is linear in the number of features d, and polynomial in the rank k, the maximum square norm \eta, and the error parameter \delta. Both running times are only polylogarithmic in the number of data points N. Our algorithm provides substantial savings compared to the classical k-means algorithm, which runs in time O(kdN) per iteration, particularly for the case of large datasets.
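
    For reference, the classical update that q-means emulates is a single Lloyd iteration. The sketch below is our own NumPy illustration, not code from the paper; the last lines mimic the quantum output model, in which each new centroid is recovered only up to \ell_2 error \delta:

        import numpy as np

        def lloyd_iteration(V, centroids):
            """One classical k-means step: assign each point, then recompute centroids."""
            # Assignment: distance from every point to every centroid.
            dists = np.linalg.norm(V[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Update: each centroid becomes the mean of its assigned points.
            return np.array([
                V[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(len(centroids))
            ])

        rng = np.random.default_rng(0)
        V = rng.normal(size=(1000, 4))               # N = 1000 points, d = 4 features
        c = V[rng.choice(len(V), 3, replace=False)]  # k = 3 initial centroids
        c = lloyd_iteration(V, c)
        # q-means returns each centroid only delta-close in l2 norm; emulate that here:
        delta = 0.01
        noise = rng.normal(size=c.shape)
        c_noisy = c + delta * noise / np.linalg.norm(noise, axis=1, keepdims=True)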

    Updated constraints on f(\mathcal{R}) gravity from cosmography

    We address the issue of constraining the class of f(\mathcal{R}) models able to reproduce the observed cosmological acceleration by using the so-called cosmography of the universe. We consider a model-independent procedure to build up an f(z) series in terms of the measurable cosmographic coefficients; we therefore derive cosmological late-time bounds on f(z) and its derivatives up to the fourth order by fitting the luminosity distance directly in terms of such coefficients. We perform a Monte Carlo analysis, using three different statistical sets of cosmographic coefficients, in which the only assumptions are the validity of the cosmological principle and that the class of f(\mathcal{R}) reduces to \LambdaCDM when z \ll 1. We use the updated Union 2.1 compilation for supernovae Ia, the constraint on the H_0 value imposed by the measurements of the Hubble Space Telescope, and the Hubble dataset, with measures of H at different z. We find statistically good agreement of the f(\mathcal{R}) class under examination with the cosmological data; we thus propose a candidate f(\mathcal{R}) which is able to pass our cosmological test, reproducing the late-time acceleration in agreement with observations. Comment: 10 pages, 9 figures, accepted for publication in Phys. Rev.
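
    The fit described above rests on the cosmographic series of the luminosity distance. A common low-order form is shown below as a generic sketch; the exact truncation and conventions used in the paper may differ:

        d_L(z) = \frac{c\,z}{H_0}\left[ 1 + \frac{1}{2}(1 - q_0)\,z - \frac{1}{6}\left(1 - q_0 - 3q_0^2 + j_0\right) z^2 + \mathcal{O}(z^3) \right],

    where q_0 and j_0 are the present-day deceleration and jerk parameters; the f(z) series and its derivatives are then bounded by fitting this expression to the supernova and H(z) data.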

    Precision cosmology with Padé rational approximations: theoretical predictions versus observational limits

    We propose a novel approach for parameterizing the luminosity distance, based on the use of rational "Padé" approximations. This new technique extends standard Taylor treatments, overcoming possible convergence issues at high redshifts that plague standard cosmography. Indeed, we show that Padé expansions enable us to confidently use data over a larger interval than the usual Taylor series. To show this property in detail, we propose several Padé expansions and compare these approximations with cosmic data, thus obtaining cosmographic bounds from the observable universe for all cases. In particular, we fit Padé luminosity distances with observational data from different uncorrelated surveys. We employ Union 2.1 supernova data, baryon acoustic oscillations, Hubble Space Telescope measurements, and differential age data. In so doing, we also demonstrate that the use of Padé approximants can improve the analyses carried out by introducing cosmographic auxiliary variables, a standard technique usually employed in cosmography to overcome the divergence problem. Moreover, for every drawback of standard cosmography, we emphasize possible resolutions in the framework of Padé approximants. In particular, we investigate how to reduce systematics, how to overcome the degeneracy between cosmological coefficients, how to treat divergences, and so forth. As a result, we show that cosmic bounds are indeed refined through the use of Padé treatments, and the best values of the cosmographic parameters derived in this way show slight departures from the standard cosmological paradigm. Although all our results are perfectly consistent with the \LambdaCDM model, evolving dark energy components different from a pure cosmological constant are not definitively ruled out. Comment: 24 pages, 9 figures
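
    The convergence advantage of Padé approximants over truncated Taylor series is easy to reproduce numerically. The toy sketch below is our own illustration, using ln(1+z) as a stand-in because its Taylor series, like the cosmographic one, breaks down for z > 1; it builds a rational approximant from the same Taylor coefficients with scipy.interpolate.pade:

        import numpy as np
        from scipy.interpolate import pade

        # Taylor coefficients of ln(1 + z) about z = 0: 0, 1, -1/2, 1/3, -1/4, 1/5.
        an = [0.0] + [(-1) ** (k + 1) / k for k in range(1, 6)]

        p, q = pade(an, 2)            # (3,2) Padé approximant from the degree-5 series
        taylor = np.poly1d(an[::-1])  # poly1d expects the highest-order coefficient first

        for z in (0.5, 1.5, 3.0):     # 1.5 and 3.0 lie outside the Taylor convergence disc
            exact = np.log1p(z)
            print(f"z={z}: exact={exact:.4f}  taylor={taylor(z):.4f}  pade={p(z)/q(z):.4f}")

    At z = 0.5 both approximations are accurate, while beyond z = 1 the truncated Taylor series diverges badly and the Padé form remains close to the exact value, mirroring the high-redshift behavior claimed in the abstract.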

    Quantum algorithms for spectral sums

    We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices. The spectral sum of a PSD matrix A, for a function f, is defined as \text{Tr}[f(A)] = \sum_j f(\lambda_j), where \lambda_j are the eigenvalues of A. Typical examples of spectral sums are the von Neumann entropy, the trace of A^{-1}, the log-determinant, and the Schatten p-norm, where the latter does not require the matrix to be PSD. The current best classical randomized algorithms estimating these quantities have a runtime that is at least linear in the number of nonzero entries of the matrix and quadratic in the estimation error. Assuming access to a block-encoding of the matrix, our algorithms are sub-linear in the matrix size and depend at most quadratically on the other parameters, like the condition number and the approximation error; they can thus compete with most of the randomized and distributed classical algorithms proposed in the literature, and polynomially improve the runtime of other quantum algorithms proposed for the same problems. We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory: approximating the number of triangles, the effective resistance, and the number of spanning trees within a graph.
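
    To make the estimated quantities concrete, the sketch below evaluates the spectral sums named above by direct eigendecomposition. This is the brute-force classical baseline (cubic in the matrix size), not the quantum algorithm, and the entropy line assumes the spectrum is normalized to unit trace as for a density matrix:

        import numpy as np

        def spectral_sums(A, p=3):
            """Evaluate common spectral sums of a symmetric PSD matrix A."""
            lam = np.linalg.eigvalsh(A)      # eigenvalues of a symmetric matrix
            lam = np.clip(lam, 1e-12, None)  # guard against tiny negative round-off
            rho = lam / lam.sum()            # normalized spectrum for the entropy
            return {
                "von_neumann_entropy": -np.sum(rho * np.log(rho)),
                "trace_inverse": np.sum(1.0 / lam),        # Tr[A^{-1}]
                "log_determinant": np.sum(np.log(lam)),    # log det A
                f"schatten_{p}_norm": np.sum(lam ** p) ** (1.0 / p),
            }

        rng = np.random.default_rng(1)
        M = rng.normal(size=(50, 50))
        A = M @ M.T + 1e-3 * np.eye(50)      # a well-conditioned PSD test matrix
        print(spectral_sums(A))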

    SUPPORT FOR THE DESIGN OF ORC PLANTS: MULTI-OBJECTIVE OPTIMIZATION

    The challenge posed by climate change and by the reduction of greenhouse gas emissions aims at limiting the rise of the global mean temperature to within 2 °C, a goal that will have to be met while the world population keeps growing. Energy efficiency is one of the key areas of intervention for reaching this target. The cost of individual energy-efficiency measures is clearly crucial, especially in a scenario of low energy prices such as the one the world is currently experiencing. It is therefore very important to evaluate each specific technological solution in light of the two variables of performance and cost (both capital and operating), in order to select the technology, or package of technologies, that offers the greatest energy savings at the lowest possible cost. For this reason, this thesis presents a numerical model which, through multi-objective optimization, can become a useful tool for the design of ORC plants and for choosing the best solution in each specific case.
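
    As a minimal illustration of the performance-versus-cost trade-off described above, the sketch below extracts the Pareto front from a set of hypothetical ORC design candidates; both the candidates and the objective values are invented for illustration and do not come from the thesis model:

        import numpy as np

        def pareto_front(points):
            """Return non-dominated (cost, savings) points: minimize cost, maximize savings."""
            front = []
            for i, (cost_i, sav_i) in enumerate(points):
                dominated = any(
                    cost_j <= cost_i and sav_j >= sav_i and (cost_j < cost_i or sav_j > sav_i)
                    for j, (cost_j, sav_j) in enumerate(points) if j != i
                )
                if not dominated:
                    front.append((cost_i, sav_i))
            return sorted(front)

        # Hypothetical design candidates: (capital + operating cost, annual energy savings).
        rng = np.random.default_rng(2)
        designs = np.column_stack([rng.uniform(100, 500, 30), rng.uniform(50, 400, 30)])
        print(pareto_front(designs.tolist()))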