
    Accelerating AdS black holes as the holographic heat engines in a benchmarking scheme

    We investigate the properties of holographic heat engines with an uncharged, non-rotating, accelerating AdS black hole as the working substance in a benchmarking scheme. We find that the efficiency of the black hole heat engines can be influenced both by the size of the benchmark circular cycle and by the cosmic string tension treated as a thermodynamic variable. In general, the efficiency can be increased by enlarging the cycle, but it remains constrained by the universal bound $2\pi/(\pi+4)$, as expected. A cross-comparison of the efficiencies of the accelerating black hole heat engines and Schwarzschild-AdS black hole heat engines suggests that the acceleration also increases the efficiency, although the amount of increase is not remarkable.
    Comment: 13 pages, 4 figures
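    For concreteness, the universal bound quoted above is simple to evaluate numerically; the following restates the abstract's figure and adds no new result:

```latex
% Universal efficiency bound for the circular benchmarking cycle
\eta \;\le\; \frac{2\pi}{\pi + 4} \approx 0.8798
```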

    An Iterative Scheme for Leverage-based Approximate Aggregation

    The current data explosion poses great challenges to approximate aggregation in terms of both efficiency and accuracy. To address this problem, we propose a novel approach that calculates aggregation answers with high accuracy using only a small portion of the data. We introduce leverages to reflect individual differences among the samples from a statistical perspective. Two kinds of estimators, the leverage-based estimator and the sketch estimator (a "rough picture" of the aggregation answer), constrain each other and are iteratively improved until their difference falls below a threshold. Owing to the iteration mechanism and the leverages, our approach achieves high accuracy. Moreover, several features, such as not requiring the sampled data to be recorded and being easy to extend to various execution modes (e.g., the online mode), make our approach well suited to big data. Experiments show that our approach performs extremely well: compared with uniform sampling, it achieves answers of the same quality with only 1/3 of the sample size.
    Comment: 17 pages, 9 figures
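    The abstract does not specify the estimators' exact form. The following is a minimal, hypothetical sketch of the two-estimator iteration it describes, assuming a SUM aggregation and an illustrative leverage score of our own choosing; it is not the paper's algorithm:

```python
import numpy as np

def iterative_leverage_sum(sample, population_size, tol=1e-3, max_iter=50):
    """Hypothetical sketch: reconcile a crude 'sketch' estimate of a SUM
    with a leverage-weighted estimate until they agree within tol.
    Illustrates the two-estimator iteration only; the leverage score
    below is an arbitrary stand-in, not the paper's definition."""
    # Sketch estimator: plain scaled-up sample mean (a "rough picture").
    sketch = population_size * np.mean(sample)
    for _ in range(max_iter):
        # Illustrative leverage scores: samples far from the mean implied
        # by the current sketch receive more weight.
        implied_mean = sketch / population_size
        lev = 1.0 + np.abs(sample - implied_mean) / (np.std(sample) + 1e-12)
        weights = lev / lev.sum()
        leverage_est = population_size * np.sum(weights * sample)
        if abs(leverage_est - sketch) < tol * abs(sketch):
            break
        # Move the sketch toward the leverage-based estimate and repeat.
        sketch = 0.5 * (sketch + leverage_est)
    return sketch

rng = np.random.default_rng(0)
data = rng.exponential(10.0, size=100_000)
samp = rng.choice(data, size=1_000, replace=False)
print(iterative_leverage_sum(samp, len(data)), data.sum())
```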

    A Galerkin boundary node method and its convergence analysis

    The boundary node method (BNM) exploits the reduced dimensionality of the boundary integral equation (BIE) and the meshless attribute of moving least-squares (MLS) approximations. However, since MLS shape functions lack the property of a delta function, it is difficult to satisfy boundary conditions exactly in the BNM. Besides, the system matrices of the BNM are non-symmetric. A Galerkin boundary node method (GBNM) is proposed in this paper for solving boundary value problems. In this approach, an equivalent variational form of a BIE is used to represent the governing equation, and the trial and test functions of the variational formulation are generated by the MLS approximation. As a result, boundary conditions can be implemented accurately and the system matrices are symmetric. Full details of the numerical implementation and error analysis are given for a general BIE. Taking the Dirichlet problem for the Laplace equation as an example, we set up a framework for error estimates of the GBNM. Some numerical examples are also given to demonstrate the efficacy of the method.
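    As a toy illustration of why MLS shape functions lack the delta property, here is a 1D sketch with a linear basis and a truncated Gaussian weight; all choices are ours, and it is far simpler than the paper's boundary-integral setting:

```python
import numpy as np

def mls_shape_functions(x, nodes, radius=0.3):
    """1D moving least-squares shape functions with linear basis
    p(t) = [1, t] and a truncated Gaussian weight (illustrative only)."""
    p = lambda t: np.array([1.0, t])
    d = np.abs(x - nodes) / radius
    w = np.where(d < 1.0, np.exp(-(d / 0.4) ** 2), 0.0)  # weights w_j(x)
    P = np.stack([p(t) for t in nodes])           # (n, 2) basis at nodes
    A = P.T @ (w[:, None] * P)                    # moment matrix A(x)
    B = (w[:, None] * P).T                        # B(x), shape (2, n)
    return p(x) @ np.linalg.solve(A, B)           # phi_j(x), shape (n,)

nodes = np.linspace(0.0, 1.0, 6)
phi = mls_shape_functions(nodes[2], nodes)
print(phi)        # phi_2 evaluated at its own node is not 1 in general,
print(phi.sum())  # yet the functions form a partition of unity: sum = 1
```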

    On the Optimality of Tape Merge of Two Lists with Similar Size

    The problem of merging two sorted lists in the least number of pairwise comparisons has been solved completely only for a few special cases. Graham and Karp \cite{taocp} independently discovered that the tape merge algorithm is optimal in the worst case when the two lists have the same size. In seminal papers, Stockmeyer and Yao \cite{yao}, Murphy and Paull \cite{3k3}, and Christen \cite{christen1978optimality} independently showed that when the lists to be merged are of size $m$ and $n$ satisfying $m\leq n\leq\lfloor\frac{3}{2}m\rfloor+1$, the tape merge algorithm is optimal in the worst case. This paper extends this result by showing that the tape merge algorithm is optimal in the worst case whenever the size of one list is no larger than 1.52 times the size of the other. The main tool we use to prove lower bounds is Knuth's adversary method \cite{taocp}. In addition, we show that the lower bound cannot be improved to 1.8 via Knuth's adversary method. We also develop a new inequality about Knuth's adversary method, which might be interesting in its own right. Moreover, we design a simple procedure that achieves a constant improvement of the upper bounds for $2m-2\leq n\leq 3m$.
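    For reference, the tape merge algorithm under discussion is the standard linear merge; a short sketch with a comparison counter (our own illustration) follows:

```python
def tape_merge(a, b):
    """Tape (linear) merge of two sorted lists, counting the pairwise
    comparisons performed. The worst case uses m + n - 1 comparisons,
    the quantity whose optimality the paper studies."""
    out, i, j, comparisons = [], 0, 0, 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])   # one list is exhausted; append the rest
    out.extend(b[j:])   # of the other without further comparisons
    return out, comparisons

merged, c = tape_merge([1, 3, 5, 7], [2, 4, 6, 8])
print(merged, c)  # fully interleaved input hits the m + n - 1 worst case: 7
```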