30,601 research outputs found

    Estimating Ratios of Normalizing Constants Using Linked Importance Sampling

    Ratios of normalizing constants for two distributions are needed both in Bayesian statistics, where they are used to compare models, and in statistical physics, where they correspond to differences in free energy. Two approaches have long been used to estimate ratios of normalizing constants. The `simple importance sampling' (SIS) or `free energy perturbation' method uses a sample drawn from just one of the two distributions. The `bridge sampling' or `acceptance ratio' estimate can be viewed as the ratio of two SIS estimates involving a bridge distribution. For both methods, difficult problems must be handled by introducing a sequence of intermediate distributions linking the two distributions of interest, with the final ratio of normalizing constants being estimated by the product of estimates of ratios for adjacent distributions in this sequence. Recently, work by Jarzynski, and independently by Neal, has shown how one can view such a product of estimates, each based on simple importance sampling using a single point, as an SIS estimate on an extended state space. This `Annealed Importance Sampling' (AIS) method produces an exactly unbiased estimate for the ratio of normalizing constants even when the Markov transitions used do not reach equilibrium. In this paper, I show how a corresponding `Linked Importance Sampling' (LIS) method can be constructed in which the estimates for individual ratios are similar to bridge sampling estimates. I show empirically that for some problems, LIS estimates are much more accurate than AIS estimates found using the same computation time, although for other problems the two methods have similar performance. Linked sampling methods similar to LIS are useful for other purposes as well.
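    As a concrete illustration of the two estimators the abstract builds on, here is a minimal Python sketch (not from the paper): two zero-mean Gaussians with known scales serve as the toy problem, so the true ratio of normalizing constants is known, and the geometric bridge sqrt(q0*q1) stands in for the bridge distribution.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy problem: two zero-mean Gaussians with known scales, so the true
        # ratio of normalizing constants Z1/Z0 equals sigma1/sigma0.
        sigma0, sigma1 = 1.5, 1.0

        def q0(x):  # unnormalized density of distribution 0 (the broader one)
            return np.exp(-0.5 * (x / sigma0) ** 2)

        def q1(x):  # unnormalized density of distribution 1
            return np.exp(-0.5 * (x / sigma1) ** 2)

        x0 = rng.normal(0.0, sigma0, size=100_000)  # draws from normalized q0
        x1 = rng.normal(0.0, sigma1, size=100_000)  # draws from normalized q1

        # Simple importance sampling (SIS): average the weights q1/q0 under p0.
        sis = np.mean(q1(x0) / q0(x0))

        # Bridge sampling with the geometric bridge sqrt(q0*q1): a ratio of two
        # SIS-style averages, one taken under each distribution.
        bridge = np.mean(np.sqrt(q1(x0) / q0(x0))) / np.mean(np.sqrt(q0(x1) / q1(x1)))

        print(sis, bridge)  # both should be close to sigma1/sigma0 = 2/3

    Sampling from the broader distribution keeps the SIS weights bounded; the bridge estimate uses draws from both distributions, which is the building block that LIS chains along a sequence of linked intermediate distributions.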

    Representing numeric data in 32 bits while preserving 64-bit precision

    Data files often consist of numbers having only a few significant decimal digits, whose information content would allow storage in only 32 bits. However, we may require that arithmetic operations involving these numbers be done with 64-bit floating-point precision, which precludes simply representing the data as 32-bit floating-point values. Decimal floating point gives a compact and exact representation, but requires conversion with a slow division operation before it can be used. Here, I show that interesting subsets of 64-bit floating-point values can be compactly and exactly represented by the 32 bits consisting of the sign, exponent, and high-order part of the mantissa, with the lower-order 32 bits of the mantissa filled in by table lookup, indexed by bits from the part of the mantissa retained, and possibly from the exponent. For example, decimal data with 4 or fewer digits to the left of the decimal point and 2 or fewer digits to the right of the decimal point can be represented in this way using the lower-order 5 bits of the retained part of the mantissa as the index. Data consisting of 6 decimal digits with the decimal point in any of the 7 positions before or after one of the digits can also be represented this way, and decoded using 19 bits from the mantissa and exponent as the index. Encoding with such a scheme is a simple copy of half the 64-bit value, followed if necessary by verification that the value can be represented, by checking that it decodes correctly. Decoding requires only extraction of index bits and a table lookup. Lookup in a small table will usually reference cache memory; even with larger tables, decoding is still faster than conversion from decimal floating point with a division operation. I discuss how such schemes perform on recent computer systems, and how they might be used to automatically compress large arrays in interpretive languages such as R.
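    As a hedged sketch of the decode-by-lookup idea (the helper names and the empirically built table are illustrative, not the paper's code), the Python below instantiates the abstract's first example: values with at most 4 decimal digits before and 2 after the decimal point, with the low 5 bits of the retained half of the mantissa as the index. It builds the table by scanning the whole subset and checks that the index is unambiguous rather than assuming it.

        import struct

        def to_bits(x):    # 64-bit IEEE-754 pattern of a double
            return struct.unpack('<Q', struct.pack('<d', x))[0]

        def from_bits(b):  # double with the given 64-bit pattern
            return struct.unpack('<d', struct.pack('<Q', b))[0]

        # Build the lookup table over the subset n/100 for |n| <= 999999,
        # checking that each 5-bit index maps to one low-order 32-bit pattern.
        table = {}
        consistent = True
        for n in range(-999999, 1000000):
            bits = to_bits(n / 100)
            hi, lo = bits >> 32, bits & 0xFFFFFFFF
            idx = hi & 0x1F  # low 5 bits of the retained part of the mantissa
            if table.setdefault(idx, lo) != lo:
                consistent = False
        print("index unambiguous:", consistent, "| table entries:", len(table))

        def decode(hi):
            """Reassemble the double: high half as stored, low half from the table."""
            return from_bits((hi << 32) | table[hi & 0x1F])

        def encode(x):
            """Copy the high 32 bits, then verify the value decodes back exactly."""
            hi = to_bits(x) >> 32
            if decode(hi) != x:
                raise ValueError("value not representable in this scheme")
            return hi

        assert decode(encode(1234.56)) == 1234.56

    Encoding is just a copy of half the 64-bit value plus the round-trip check, and decoding is an index extraction and a single table load, matching the cost argument in the abstract.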

    Microcomputer versus mainframe simulations: A case study

    The research was conducted in two parts. Part one consisted of a study of the feasibility of running the Space Transportation Model simulation on an office IBM-AT. The second part was to design simulation runs so as to study the effects of certain performance factors on the execution of the simulation model. The results of this research are given in the two reports which follow: Microcomputer vs. Mainframe Simulation: A Case Study and Fractional Factorial Designs of Simulation Runs for the Space Transportation System Operations Model. In the first part, a DOS batch job was written to simplify execution of the simulation model on an office microcomputer. A comparative study was then performed of running the model on NASA-Langley's mainframe computer versus on the IBM-AT microcomputer, in order to identify the advantages and disadvantages of each machine and to determine whether running the model on the office PC was practical. The study concluded that it was. The large number of performance parameters in the Space Transportation Model precluded running the full factorial design needed to determine the most significant design factors; the second report therefore gives several suggested fractional factorial designs which require far fewer simulation runs to determine which factors have a significant influence on the results.
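    The run-count argument can be made concrete with a small illustrative Python sketch (the generator choice below is a textbook 2^(7-4) resolution III design, not necessarily one from the report): a full two-level factorial in 7 factors needs 2^7 = 128 runs, while the fraction below screens all 7 factors in only 8 runs.

        from itertools import product

        # Full two-level factorial in the three base factors A, B, C (8 runs),
        # with the remaining factors set by the generators D=AB, E=AC, F=BC, G=ABC.
        runs = [(a, b, c, a * b, a * c, b * c, a * b * c)
                for a, b, c in product([-1, 1], repeat=3)]

        print("fractional design runs:", len(runs), "| full factorial:", 2 ** 7)
        for run in runs:
            print(run)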