The impact of financial histories on individuals and societies: A replication and extension of Berg et al. (1995)
We replicate and extend the social history treatment of the Berg, Dickhaut, and McCabe (1995) investment game to further document how the reporting of financial history influences how laboratory societies organize themselves over time. We replicate Berg et al. (1995) by conducting a No History and a Financial History session to determine whether a report summarizing the financial transactions of a previous experimental session significantly reduces entropy in the amounts sent by Investors and returned by Stewards in the investment game, as Berg et al. (1995) found. We extend Berg et al. (1995) in two ways. First, we conduct a total of five sessions (one No History and four Financial History sessions). Second, we introduce Shannon's (1948) measure of entropy from information theory to assess whether the introduction of financial transaction history reduces the dispersion in the amounts invested and returned across generations of players. Results across sessions indicate that entropy declined in both the amounts sent by Investors and the percentage returned by Stewards, but these patterns are weaker and more mixed than those in the Berg et al. (1995) study. Additional research is needed to test how initial conditions, path dependencies, actors' strategic reasoning about others' behavior, multiple sessions, and communication may mediate the impact of financial history. The study's multiple successive Financial History sessions and entropy measure are new to the investment game literature.
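As a rough illustration of the Shannon (1948) entropy measure referenced in this abstract, below is a minimal sketch of computing entropy over binned investment amounts; the binning and example data are hypothetical and not taken from the study.

```python
import numpy as np

def shannon_entropy(values, bins):
    """Shannon (1948) entropy, in bits, of a sample binned into discrete categories."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()           # empirical probabilities of occupied bins
    return -np.sum(p * np.log2(p))

# Hypothetical amounts sent by Investors (whole dollars, 0-10) under two conditions
no_history   = [1, 3, 5, 7, 10, 0, 4, 8, 2, 6]      # widely dispersed
with_history = [5, 5, 6, 5, 4, 5, 6, 5, 5, 4]       # concentrated around a modal amount

bins = range(0, 12)                                 # one bin per whole-dollar amount
print(shannon_entropy(no_history, bins))            # higher entropy: more dispersion
print(shannon_entropy(with_history, bins))          # lower entropy: less dispersion
```

Lower entropy across generations would correspond to the reduced dispersion in amounts sent and returned that the study tests for.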
BioNessie(G) - a grid enabled biochemical networks simulation environment
The simulation of biochemical networks provides insight and understanding about the underlying biochemical processes and pathways used by cells and organisms. BioNessie is a biochemical network simulator which has been developed at the University of Glasgow. This paper describes the simulator and focuses in particular on how it has been extended to benefit from a wide variety of high performance compute resources across the UK through Grid technologies to support larger scale simulations.
Hybrid Iterative Multiuser Detection for Channel Coded Space Division Multiple Access OFDM Systems
Space division multiple access (SDMA) aided orthogonal frequency division multiplexing (OFDM) systems assisted by efficient multiuser detection (MUD) techniques have recently attracted intensive research interest. The maximum likelihood detection (MLD) arrangement was found to attain the best performance, although this was achieved at the cost of a computational complexity which increases exponentially both with the number of users and with the number of bits per symbol transmitted by higher-order modulation schemes. By contrast, the minimum mean-square error (MMSE) SDMA-MUD exhibits a lower complexity at the cost of a performance loss. Forward error correction (FEC) schemes, such as turbo trellis coded modulation (TTCM), may be efficiently combined with SDMA-OFDM systems for the sake of improving the achievable performance. Genetic algorithm (GA) based multiuser detection techniques have been shown to provide a good performance in MUD-aided code division multiple access (CDMA) systems. In this contribution, a GA-aided MMSE MUD is proposed for employment in a TTCM assisted SDMA-OFDM system, which is capable of achieving a similar performance to that attained by its optimum MLD-aided counterpart at a significantly lower complexity, especially at high user loads. Moreover, when the proposed biased Q-function based mutation (BQM) assisted iterative GA (IGA) MUD is employed, the GA-aided system's performance can be further improved, for example, by reducing the bit error ratio (BER) measured at 3 dB by about five orders of magnitude in comparison to the TTCM assisted MMSE-SDMA-OFDM benchmarker system, while still maintaining modest complexity.
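As background for the MMSE detector this abstract builds on, below is a minimal sketch of textbook linear MMSE multiuser detection for a single SDMA-OFDM subcarrier; the dimensions, noise level, and symbol alphabet are illustrative assumptions, and this is not the authors' GA-aided scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

users, rx_antennas = 4, 4                       # illustrative SDMA setup for one subcarrier
H = (rng.standard_normal((rx_antennas, users)) +
     1j * rng.standard_normal((rx_antennas, users))) / np.sqrt(2)   # flat channel per subcarrier
sigma2 = 0.1                                    # noise variance, assumed known at the receiver

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qpsk, size=users)                # one QPSK symbol per user
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(rx_antennas) +
                               1j * rng.standard_normal(rx_antennas))
y = H @ x + noise                               # signal received at the antenna array

# Linear MMSE combiner: W = (H^H H + sigma^2 I)^{-1} H^H
W = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(users), H.conj().T)
x_hat = W @ y                                   # soft symbol estimates, one per user
print(np.round(x_hat, 2))
```

A GA-aided MUD, as proposed in the paper, would start from such MMSE estimates and search the discrete symbol space for a better joint decision at a cost far below exhaustive ML detection.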
Effect of Statistical Fluctuation in Monte Carlo Based Photon Beam Dose Calculation on Gamma Index Evaluation
The gamma-index test has been commonly adopted to quantify the degree of agreement between a reference dose distribution and an evaluation dose distribution. Monte Carlo (MC) simulation has been widely used for radiotherapy dose calculation for both clinical and research purposes. The goal of this work is to investigate, both theoretically and experimentally, the impact of MC statistical fluctuation on the gamma-index test when the fluctuation exists in the reference dose distribution, the evaluation dose distribution, or both. To first-order approximation, we theoretically demonstrated in a simplified model that statistical fluctuation tends to overestimate gamma-index values when it exists in the reference dose distribution and to underestimate them when it exists in the evaluation dose distribution, provided the original gamma-index is relatively large compared with the statistical fluctuation. Our numerical experiments using clinical photon radiation therapy cases have shown that 1) when performing a gamma-index test between an MC reference dose and a non-MC evaluation dose, the average gamma-index is overestimated and the passing rate decreases as the noise level in the reference dose increases; 2) when performing a gamma-index test between a non-MC reference dose and an MC evaluation dose, the average gamma-index is underestimated when the doses are within the clinically relevant range and the passing rate increases as the noise level in the evaluation dose increases; 3) when performing a gamma-index test between an MC reference dose and an MC evaluation dose, the passing rate is overestimated due to the noise in the evaluation dose and underestimated due to the noise in the reference dose. We conclude that the gamma-index test should be used with caution when comparing dose distributions computed with Monte Carlo simulation.
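For reference, below is a minimal brute-force sketch of the 1D gamma-index computation underlying the test described in this abstract; the 3%/3 mm criteria, grid spacing, and synthetic profiles are illustrative assumptions, not the settings or data used in the study.

```python
import numpy as np

def gamma_index_1d(ref, eva, spacing_mm, dose_crit=0.03, dta_mm=3.0):
    """Brute-force 1D gamma index of a reference dose profile against an
    evaluation profile on the same grid, using a global dose criterion."""
    pos = np.arange(len(ref)) * spacing_mm
    dose_norm = dose_crit * ref.max()                       # global dose-difference criterion
    gamma = np.empty(len(ref))
    for i, (r_pos, r_dose) in enumerate(zip(pos, ref)):
        dist2 = ((pos - r_pos) / dta_mm) ** 2               # squared distance-to-agreement term
        dd2 = ((eva - r_dose) / dose_norm) ** 2             # squared dose-difference term
        gamma[i] = np.sqrt(np.min(dist2 + dd2))             # minimise over all evaluation points
    return gamma

ref = np.exp(-((np.arange(100) - 50) / 20.0) ** 2)          # smooth "reference" profile
eva = ref + 0.01 * np.random.default_rng(1).standard_normal(100)   # noisy "evaluation" profile
g = gamma_index_1d(ref, eva, spacing_mm=1.0)
print(f"mean gamma = {g.mean():.3f}, passing rate = {(g <= 1).mean():.1%}")
```

Points with gamma <= 1 pass the combined dose-difference/distance-to-agreement criterion; raising the noise amplitude in either profile shifts the gamma values and passing rate in the directions the study quantifies.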
