An Evolutionary Game Theoretic Model of Rhino Horn Devaluation
Rhino populations are at a critical level due to the demand for rhino horn
and the subsequent poaching. Wildlife managers attempt to secure rhinos with
approaches to devalue the horn, the most common of which is dehorning. Game
theory has been used to examine the interaction of poachers and wildlife
managers where a manager can either 'dehorn' their rhinos or leave the horn
attached, and poachers may behave 'selectively' or 'indiscriminately'. The
approach described in this paper builds on this previous work and investigates
the interactions between the poachers. We build an evolutionary game theoretic
model and determine which strategy a poacher prefers in various
populations of poachers. The purpose of this work is to discover whether
conditions exist which encourage poachers to behave selectively, that is,
to kill only those rhinos with full horns.
The analytical results show that full devaluation of all rhinos is likely to
lead to indiscriminate poaching. In turn, this shows that devaluing rhinos can
be effective only when implemented alongside a strong disincentive framework.
This paper aims to contribute to the research required for informed discussion
in the lively debate on legalising rhino horn trade.
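The strategy dynamics described above can be illustrated with a minimal replicator-equation simulation. Everything below is a sketch under stated assumptions: the payoff matrices are hypothetical placeholders, not the paper's model, which would encode horn value, dehorning costs, and penalties.

```python
def poacher_fitness(x, payoff):
    """Expected payoffs of selective (row 0) and indiscriminate (row 1)
    poachers when a fraction x of the population plays selectively.
    `payoff` is a hypothetical 2x2 matrix, not the paper's."""
    f_sel = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f_ind = payoff[1][0] * x + payoff[1][1] * (1 - x)
    return f_sel, f_ind

def evolve(x0, payoff, steps=10_000, dt=0.01):
    """Euler-integrate the replicator equation
    dx/dt = x(1 - x)(f_sel - f_ind) from initial fraction x0."""
    x = x0
    for _ in range(steps):
        f_sel, f_ind = poacher_fitness(x, payoff)
        x += dt * x * (1 - x) * (f_sel - f_ind)
    return x
```

With a payoff matrix in which indiscriminate poaching dominates (as the paper predicts under full devaluation without disincentives), the selective fraction collapses to zero; reversing the dominance (a strong disincentive) drives the population toward selective play.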
Higher Spin Alternating Sign Matrices
We define a higher spin alternating sign matrix to be an integer-entry square
matrix in which, for a nonnegative integer r, all complete row and column sums
are r, and all partial row and column sums extending from each end of the row
or column are nonnegative. Such matrices correspond to configurations of spin
r/2 statistical mechanical vertex models with domain-wall boundary conditions.
The case r=1 gives standard alternating sign matrices, while the case in which
all matrix entries are nonnegative gives semimagic squares. We show that the
higher spin alternating sign matrices of size n are the integer points of the
r-th dilate of an integral convex polytope of dimension (n-1)^2 whose vertices
are the standard alternating sign matrices of size n. It then follows that, for
fixed n, these matrices are enumerated by an Ehrhart polynomial in r.
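The defining conditions translate directly into a membership test. The sketch below is our own illustration (names and structure are not from the paper): it checks that an integer matrix has all complete row and column sums equal to r and all partial sums from either end nonnegative.

```python
def is_higher_spin_asm(M, r):
    """Check whether square integer matrix M is a higher spin
    alternating sign matrix for the given nonnegative integer r."""
    n = len(M)
    # complete row and column sums must all equal r
    if any(sum(row) != r for row in M):
        return False
    if any(sum(M[i][j] for i in range(n)) != r for j in range(n)):
        return False
    # partial sums extending from each end of every row and column
    # must be nonnegative
    lines = [list(row) for row in M] + [list(col) for col in zip(*M)]
    for line in lines:
        for seq in (line, line[::-1]):
            s = 0
            for v in seq:
                s += v
                if s < 0:
                    return False
    return True
```

The case r = 1 accepts exactly the standard alternating sign matrices, and nonnegative-entry examples are semimagic squares, matching the special cases noted in the abstract.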
Using a theory of mind to find best responses to memory-one strategies
Memory-one strategies are a set of Iterated Prisoner's Dilemma strategies
that have been praised for their mathematical tractability and performance
against single opponents. This manuscript investigates best response memory-one
strategies with a theory of mind for their opponents. The results add to the
literature that has shown that extortionate play is not always optimal by
showing that optimal play is often not extortionate. They also provide evidence
that memory-one strategies suffer from their limited memory in multi-agent
interactions and can be outperformed by optimised strategies with longer
memory. We have developed a theory that has allowed us to explore the entire
space of memory-one strategies. The framework presented is suitable for
studying memory-one strategies not only in the Prisoner's Dilemma but also in
evolutionary processes such as the Moran process. Furthermore, results on the
stability of defection in populations of memory-one strategies are obtained.
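The long-run payoff of one memory-one strategy against another is standard to compute from the stationary distribution of the induced Markov chain over the four outcome states (CC, CD, DC, DD). The sketch below uses plain power iteration; the function names and the payoff values (R, S, T, P) = (3, 0, 5, 1) are conventional choices for illustration, not taken from the manuscript.

```python
def stationary(p, q, iters=2000):
    """Stationary distribution over states (CC, CD, DC, DD), seen from the
    focal player. p and q are cooperation probabilities after each state;
    the opponent sees CD as DC, hence the index swap for q."""
    qq = [q[0], q[2], q[1], q[3]]
    v = [0.25] * 4
    for _ in range(iters):
        w = [0.0] * 4
        for s in range(4):
            a, b = p[s], qq[s]
            probs = [a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)]
            for t in range(4):
                w[t] += v[s] * probs[t]
        v = w
    return v

def payoff(p, q, R=3, S=0, T=5, P=1):
    """Mean long-run payoff of memory-one strategy p against q."""
    v = stationary(p, q)
    return R * v[0] + S * v[1] + T * v[2] + P * v[3]
```

For example, Tit For Tat ([1, 0, 1, 0]) against Always Defect settles into mutual defection and earns P, while Always Cooperate against Always Defect is fully exploited and earns S.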
An experimental evaluation of error seeding as a program validation technique
A previously reported experiment in error seeding as a program validation technique is summarized. The experiment was designed to test the validity of three assumptions on which the alleged effectiveness of error seeding is based. Errors were seeded into 17 functionally identical but independently programmed Pascal programs in such a way as to produce 408 programs, each with one seeded error. Using mean time to failure as a metric, results indicated that it is possible to generate seeded errors that are arbitrarily but not equally difficult to locate. Examination of indigenous errors demonstrated that these are also arbitrarily difficult to locate. These two results support the assumption that seeded and indigenous errors are approximately equally difficult to locate. However, the assumption that, for each type of error, all errors are equally difficult to locate was not borne out. Finally, since a seeded error occasionally corrected an indigenous error, the assumption that errors do not interfere with each other was proven wrong. Error seeding can be made useful by taking these results into account in modifying the underlying model.
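The estimator whose assumptions the experiment probes can be sketched with the standard Mills-style seeding formula: if seeded and indigenous errors are equally hard to find, the fraction of seeded errors detected estimates the detection rate for indigenous errors as well. The function below is our illustration of that textbook estimate, not code from the study.

```python
def estimate_indigenous(seeded_total, seeded_found, indigenous_found):
    """Mills-style error-seeding estimate of the total number of
    indigenous errors, assuming seeded and indigenous errors are
    equally likely to be detected (the first assumption the
    experiment tests)."""
    if seeded_found == 0:
        raise ValueError("no seeded errors found; estimate undefined")
    # detection rate observed on seeded errors, applied to indigenous ones
    return seeded_total * indigenous_found / seeded_found
```

For instance, finding 5 of 10 seeded errors alongside 4 indigenous errors suggests roughly 8 indigenous errors in total. The experiment's results (unequal difficulty within error types, interference between errors) indicate where this simple model needs modification.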
Steady-state distributions for models of bubbles: their existence and econometric implications
The purpose of this paper is to examine the properties of bubbles in the light of steady-state results for threshold autoregressive (TAR) models recently derived by Knight and Satchell (2011). We assert that this has implications for econometrics. We study the conditions under which we can obtain a steady-state distribution of asset prices using our simple model of bubbles, based on our particular definition of a bubble. We derive general results and extend the analysis by considering the steady-state distribution in three cases: (I) a normally distributed error process, (II) a non-normally (exponentially) distributed steady-state process, and (III) a switching random walk with a fairly general i.i.d. error process. We then examine the issues related to unit-root testing for the presence of bubbles using standard econometric procedures. As an example, we consider the market for art, which shows distinctly bubble-like characteristics. Our results shed light on the ubiquitous finding of no bubbles in the econometric literature.
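A switching process of the kind described in case (III) is straightforward to simulate. The sketch below is a generic two-regime TAR(1) with persistence switching on the sign of the state; all parameter values are illustrative and not taken from Knight and Satchell's model.

```python
import random

def simulate_tar(n, phi_low=0.9, phi_high=1.02, threshold=0.0,
                 sigma=1.0, seed=0):
    """Simulate a TAR(1) path: x_t = phi * x_{t-1} + e_t, where phi
    switches depending on whether x lies above the threshold.
    phi_high > 1 mimics a locally explosive, bubble-like regime;
    with both phi values below 1 the process is stationary."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        phi = phi_high if x > threshold else phi_low
        x = phi * x + rng.gauss(0.0, sigma)
        path.append(x)
    return path
```

When both regimes are mean-reverting the path stays bounded, consistent with the existence of a steady-state distribution; an explosive upper regime produces the bubble-like excursions the paper studies.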
Role of the microbiome, probiotics, and 'dysbiosis therapy' in critical illness.
Purpose of review: Loss of 'health-promoting' microbes and overgrowth of pathogenic bacteria (dysbiosis) in the ICU is believed to contribute to nosocomial infections, sepsis, and organ failure (multiple organ dysfunction syndrome). This review discusses new understanding of ICU dysbiosis, new data for probiotics and fecal transplantation in the ICU, and new data characterizing the ICU microbiome. Recent findings: ICU dysbiosis results from many factors, including ubiquitous antibiotic use and overuse. Despite advances in antibiotic therapy, infections and mortality from often multidrug-resistant organisms (e.g., Clostridium difficile) are increasing. This raises the question of whether restoration of a healthy microbiome via probiotics or other 'dysbiosis therapies' would be an optimal alternative, or parallel treatment option, to antibiotics. Recent clinical data demonstrate that probiotics can reduce ICU infections and that probiotics or fecal microbial transplant (FMT) can treat Clostridium difficile infection. This contributes to recommendations that probiotics should be considered to prevent infection in the ICU. Unfortunately, significant clinical variability limits the strength of current recommendations, and further large clinical trials of probiotics and FMT are needed. Before larger trials of 'dysbiosis therapy' can be thoughtfully undertaken, further characterization of ICU dysbiosis is needed. To address this, we conducted an initial analysis demonstrating a rapid and marked change from a 'healthy' microbiome to an often pathogen-dominant microbiota (dysbiosis) in a broad ICU population. Summary: A growing body of evidence suggests that critical illness and ubiquitous antibiotic use lead to ICU dysbiosis, which is associated with increased ICU infection, sepsis, and multiple organ dysfunction syndrome. Probiotics and FMT show promise as ICU therapies for infection. We hope that future targeted therapies using microbiome signatures can be developed to correct 'illness-promoting' dysbiosis and restore a healthy microbiome post-ICU, improving patient outcomes.
Customer profitability analysis -- Part I: alternative approaches toward customer profitability
Critical mutation rate has an exponential dependence on population size in haploid and diploid populations
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. 
The effect of population size is particularly strong in small populations of 100 individuals or fewer; the exponential model has significant potential to aid population management in preventing local (and global) extinction events.
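For context, the analytical error-threshold model against which the algorithmic method is verified can be stated compactly. The sketch below encodes Eigen's single-peak approximation in its textbook form (fitness advantage sigma of the master sequence, genome length L); it is not the paper's specific model.

```python
import math

def error_threshold(sigma, L):
    """Eigen's single-peak approximation: the fit 'master' sequence is
    lost from the population once the per-site mutation rate exceeds
    ln(sigma) / L, where sigma is the master's fitness advantage and
    L is the genome length."""
    return math.log(sigma) / L
```

The formula captures two qualitative points consistent with the abstract: longer genomes tolerate lower per-site mutation rates, and a larger fitness advantage raises the threshold.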
