Valuation equilibrium
We introduce a new solution concept for games in extensive form with perfect information, valuation equilibrium, which is based on a partition of each player's moves into similarity classes. A valuation of a player is a real-valued function on the set of her similarity classes. In this equilibrium each player's strategy is optimal in the sense that at each of her nodes, a player chooses a move that belongs to a class with maximal valuation. The valuation of each player is consistent with the strategy profile in the sense that the valuation of a similarity class is the player's expected payoff, given that the path (induced by the strategy profile) intersects the similarity class. The solution concept is applied to decision problems and to multi-player extensive form games, and it is contrasted with existing solution concepts. The valuation approach is next applied to stopping games, in which non-terminal moves form a single similarity class, and we note that the behaviors obtained echo some biases observed experimentally. Finally, we tentatively suggest a way of endogenizing the similarity partitions, in which moves are categorized according to how well they perform relative to the expected equilibrium value, interpreted as the aspiration level.
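The fixed-point flavor of this consistency condition can be sketched on a toy decision problem (the tree, class names, and payoffs below are hypothetical, not from the paper): valuations induce a strategy that plays maximal-valuation classes, and the strategy in turn induces class valuations as conditional expected payoffs. Off-path classes default to 0 here; the paper treats them more carefully.

```python
# Toy one-player decision tree: internal nodes are strings, leaves are payoffs.
# Each (node, move) pair belongs to a similarity class. Hypothetical example.
TREE = {
    "root": {"a": "mid", "b": 2.0},
    "mid": {"c": 3.0, "d": 0.0},
}
CLASS_OF = {("root", "a"): "X", ("root", "b"): "Y",
            ("mid", "c"): "X", ("mid", "d"): "Y"}

def strategy(valuation):
    """At each node, randomize uniformly over moves whose class has maximal valuation."""
    strat = {}
    for node, moves in TREE.items():
        best = max(valuation[CLASS_OF[node, m]] for m in moves)
        cand = [m for m in moves if valuation[CLASS_OF[node, m]] == best]
        strat[node] = {m: 1.0 / len(cand) for m in cand}
    return strat

def paths(strat, node, p=1.0, used=frozenset()):
    """Enumerate (probability, payoff, classes-used) over paths induced by strat."""
    if not isinstance(node, str):           # leaf: node is a payoff
        yield p, node, used
        return
    for m, pm in strat[node].items():
        yield from paths(strat, TREE[node][m], p * pm, used | {CLASS_OF[node, m]})

def class_values(strat):
    """Valuation of a class = expected payoff given the path intersects the class."""
    allp = list(paths(strat, "root"))
    vals = {}
    for c in set(CLASS_OF.values()):
        den = sum(p for p, _, used in allp if c in used)
        num = sum(p * pay for p, pay, used in allp if c in used)
        vals[c] = num / den if den else 0.0  # off-path classes default to 0
    return vals

# Iterate to a fixed point: here play converges to moves a then c (payoff 3).
val = {c: 0.0 for c in set(CLASS_OF.values())}
for _ in range(20):
    strat = strategy(val)
    val = class_values(strat)
```

In this toy instance the iteration settles on class X, whose valuation equals the realized payoff 3, which is exactly the consistency requirement stated above.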
SrKZnMnAs: a ferromagnetic semiconductor with colossal magnetoresistance
A bulk diluted magnetic semiconductor (Sr,K)(Zn,Mn)As was synthesized with decoupled charge and spin doping. It has a hexagonal CaAlSi-type structure with the (Zn,Mn)As layer forming a honeycomb-like network. Magnetization measurements show that the sample undergoes a ferromagnetic transition with a Curie temperature of 12 K, and the magnetic moment reaches about 1.5 μB/Mn under μ0H = 5 T at T = 2 K. Surprisingly, a colossal negative magnetoresistance, defined as [ρ(H) − ρ(0)]/ρ(0), of up to 38% under a low field of μ0H = 0.1 T and up to 99.8% under μ0H = 5 T was observed at T = 2 K. The colossal magnetoresistance can be explained based on Anderson localization theory.
Comment: Accepted for publication in EP
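Under the conventional definition MR = [ρ(H) − ρ(0)]/ρ(0), the reported −99.8% at 5 T corresponds to a roughly 500-fold drop in resistivity, which a few lines of arithmetic confirm (the unit resistivity value is illustrative, not from the paper):

```python
def magnetoresistance(rho_H, rho_0):
    """Conventional MR ratio: negative when resistivity drops in field."""
    return (rho_H - rho_0) / rho_0

rho_0 = 1.0                       # illustrative zero-field resistivity
rho_5T = rho_0 * (1 - 0.998)      # a 99.8% drop, as reported at 5 T and 2 K
mr = magnetoresistance(rho_5T, rho_0)   # -0.998, i.e. -99.8%
ratio = rho_0 / rho_5T                  # resistivity drops by a factor of ~500
```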
Treating Homeless Opioid Dependent Patients with Buprenorphine in an Office-Based Setting
CONTEXT
Although office-based opioid treatment with buprenorphine (OBOT-B) has been successfully implemented in primary care settings in the US, its use has not been reported in homeless patients.
OBJECTIVE
To characterize the feasibility of OBOT-B in homeless relative to housed patients.
DESIGN
A retrospective record review examining treatment failure, drug use, utilization of substance abuse treatment services, and intensity of clinical support by a nurse care manager (NCM) among homeless and housed patients in an OBOT-B program between August 2003 and October 2004. Treatment failure was defined as elopement before completing medication induction, discharge after medication induction due to ongoing drug use with concurrent nonadherence with intensified treatment, or discharge due to disruptive behavior.
RESULTS
Of 44 homeless and 41 housed patients enrolled over 12 months, homeless patients were more likely to be older, nonwhite, unemployed, infected with HIV and hepatitis C, and to report a psychiatric illness. Homeless patients had fewer social supports and more chronic substance abuse histories, with a 3- to 6-fold greater number of years of drug use, number of detoxification attempts, and percentage with a history of methadone maintenance treatment. The proportion of subjects with treatment failure for the homeless (21%) and housed (22%) did not differ (P=.94). At 12 months, both groups had similar proportions with illicit opioid use [Odds ratio (OR), 0.9 (95% CI, 0.5–1.7) P=.8], utilization of counseling (homeless, 46%; housed, 49%; P=.95), and participation in mutual-help groups (homeless, 25%; housed, 29%; P=.96). At 12 months, 36% of the homeless group was no longer homeless. During the first month of treatment, homeless patients required more clinical support from the NCM than housed patients.
CONCLUSIONS
Despite homeless opioid dependent patients' social instability, greater comorbidities, and more chronic drug use, office-based opioid treatment with buprenorphine was effectively implemented in this population, with outcomes comparable to those in housed patients with respect to treatment failure, illicit opioid use, and utilization of substance abuse treatment.
Multi-Step Processing of Spatial Joins
Spatial joins are one of the most important operations for combining spatial objects of several relations. In this paper, spatial join processing is studied in detail for extended spatial objects in two-dimensional data space. We present an approach for spatial join processing that is based on three steps. First, a spatial join is performed on the minimum bounding rectangles of the objects, returning a set of candidates. Various approaches for accelerating this step of join processing were examined at last year's conference [BKS 93a]. In this paper, we focus on the problem of how to compute the answers from the set of candidates, which is handled by
the following two steps. First of all, sophisticated approximations
are used to identify answers as well as to filter out false hits from
the set of candidates. For this purpose, we investigate various types
of conservative and progressive approximations. In the last step, the
exact geometry of the remaining candidates has to be tested against
the join predicate. The time required for computing spatial join
predicates can essentially be reduced when objects are adequately
organized in main memory. In our approach, objects are first decomposed
into simple components which are exclusively organized
by a main-memory resident spatial data structure. Overall, we
present a complete approach of spatial join processing on complex
spatial objects. The performance of the individual steps of our approach
is evaluated with data sets from real cartographic applications.
The results show that our approach reduces the total execution
time of the spatial join by factors.
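The filter-and-refine pipeline described above can be sketched minimally (illustrative only: it collapses the paper's intermediate approximation step into the exact test, and all function names are hypothetical):

```python
from itertools import product

# Polygons are vertex lists [(x, y), ...]; MBRs are (xmin, ymin, xmax, ymax).

def mbr(poly):
    """Minimum bounding rectangle of a polygon."""
    xs = [x for x, _ in poly]
    ys = [y for _, y in poly]
    return (min(xs), min(ys), max(xs), max(ys))

def mbrs_intersect(a, b):
    """Conservative filter: axis-aligned rectangle overlap test."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def spatial_join(R, S, exact_predicate):
    # Step 1: MBR join produces a candidate set (may contain false hits).
    cands = [(r, s) for r, s in product(R, S)
             if mbrs_intersect(mbr(r), mbr(s))]
    # Steps 2-3 (merged here): refine candidates with the exact geometry test;
    # the paper additionally filters with conservative/progressive
    # approximations before touching the exact geometry.
    return [(r, s) for r, s in cands if exact_predicate(r, s)]
```

A real implementation would replace the nested-loop candidate generation with an R-tree-style join and decompose complex geometries into simple components, as the abstract describes.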
Querying Probabilistic Neighborhoods in Spatial Data Sets Efficiently
In this paper we define the notion of a probabilistic neighborhood in spatial data: let a set P of points, a query point q, a distance metric dist, and a monotonically decreasing function f be given. Then a point p in P belongs to the probabilistic neighborhood of q with respect to f with probability f(dist(p, q)). We envision applications in facility location, sensor networks, and other scenarios where a connection between two entities becomes less likely with increasing distance. A straightforward query algorithm would determine a probabilistic neighborhood in time linear in |P| by probing each point in P. To answer the query in sublinear time for the planar case, we augment a quadtree suitably and design a corresponding query algorithm. Our theoretical analysis shows that, for certain distributions of planar P, our algorithm answers a query in sublinear time with high probability (whp). This matches, up to a logarithmic factor, the cost induced by quadtree-based algorithms for deterministic queries, and it is asymptotically faster than the straightforward approach whenever the expected neighborhood is small compared to the full point set.
As practical proofs of concept we use two applications, one in the Euclidean
and one in the hyperbolic plane. In particular, our results yield the first
generator for random hyperbolic graphs with arbitrary temperatures in
subquadratic time. Moreover, our experimental data show the usefulness of our
algorithm even if the point distribution is unknown or not uniform: The running
time savings over the pairwise probing approach constitute at least one order
of magnitude already for a modest number of points and queries.
Comment: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-44543-4_3
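The straightforward linear-time baseline mentioned in the abstract can be sketched directly (the quadtree-based sublinear algorithm is the paper's contribution and is not reproduced here; names are hypothetical):

```python
import math
import random

def prob_neighborhood(P, q, f, rng=random):
    """Straightforward O(|P|) query: include each point p in the result
    independently with probability f(dist(p, q))."""
    out = []
    for p in P:
        d = math.dist(p, q)          # Euclidean metric as the example dist
        if rng.random() < f(d):
            out.append(p)
    return out
```

With a hard threshold function f (returning only 0 or 1), the probabilistic neighborhood degenerates to an ordinary deterministic range query, which is a useful sanity check.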
A well-separated pairs decomposition algorithm for k-d trees implemented on multi-core architectures
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Variations of k-d trees represent a fundamental data structure used in Computational Geometry, with numerous applications in science: for example, particle track fitting in the software of the LHC experiments, and simulations of N-body systems in the study of the dynamics of interacting galaxies, particle beam physics, and molecular dynamics in biochemistry. The many-body tree methods devised by Barnes and Hut in the 1980s and the Fast Multipole Method introduced in 1987 by Greengard and Rokhlin use variants of k-d trees to reduce the computation time upper bounds from O(n^2) to O(n log n) and even O(n). We present an algorithm that uses the principle of well-separated pairs decomposition to always produce compressed trees in O(n log n) work. We present and evaluate parallel implementations of the algorithm that can take advantage of multi-core architectures.
Acknowledgement: The Science and Technology Facilities Council, UK.
Sampling-based Algorithms for Optimal Motion Planning
During the last decade, sampling-based path planning algorithms, such as
Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have
been shown to work well in practice and possess theoretical guarantees such as
probabilistic completeness. However, little effort has been devoted to the
formal analysis of the quality of the solution returned by such algorithms,
e.g., as a function of the number of samples. The purpose of this paper is to
fill this gap, by rigorously analyzing the asymptotic behavior of the cost of
the solution returned by stochastic sampling-based algorithms as the number of
samples increases. A number of negative results are provided, characterizing
existing algorithms, e.g., showing that, under mild technical conditions, the
cost of the solution returned by broadly used sampling-based algorithms
converges almost surely to a non-optimal value. The main contribution of the
paper is the introduction of new algorithms, namely, PRM* and RRT*, which are
provably asymptotically optimal, i.e., such that the cost of the returned
solution converges almost surely to the optimum. Moreover, it is shown that the
computational complexity of the new algorithms is within a constant factor of
that of their probabilistically complete (but not asymptotically optimal)
counterparts. The analysis in this paper hinges on novel connections between
stochastic sampling-based path planning algorithms and the theory of random
geometric graphs.
Comment: 76 pages, 26 figures, to appear in International Journal of Robotics Research
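A minimal RRT sketch in an obstacle-free unit square illustrates the sampling-and-steering loop this analysis applies to (plain RRT, not the paper's asymptotically optimal RRT*; all parameters are illustrative):

```python
import math
import random

def rrt(start, goal, steps=500, eps=0.1, goal_tol=0.15, seed=42):
    """Minimal RRT in the obstacle-free unit square. Returns a start-to-goal
    path (list of points) if the goal region is reached, else None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {start: None}
    for _ in range(steps):
        x_rand = (rng.random(), rng.random())        # uniform sample
        x_near = min(nodes, key=lambda n: math.dist(n, x_rand))
        d = math.dist(x_near, x_rand)
        if d == 0:
            continue
        t = min(1.0, eps / d)                        # steer at most eps
        x_new = (x_near[0] + t * (x_rand[0] - x_near[0]),
                 x_near[1] + t * (x_rand[1] - x_near[1]))
        parent[x_new] = x_near                       # no obstacles: always valid
        nodes.append(x_new)
        if math.dist(x_new, goal) <= goal_tol:       # reached the goal region
            path = [x_new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None
```

RRT* would additionally choose the minimum-cost parent within a shrinking neighborhood and rewire nearby nodes after each insertion; that rewiring step is precisely what restores asymptotic optimality.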
HIV Prevention in Care and Treatment Settings: Baseline Risk Behaviors among HIV Patients in Kenya, Namibia, and Tanzania.
HIV care and treatment settings provide an opportunity to reach people living with HIV/AIDS (PLHIV) with prevention messages and services. Population-based surveys in sub-Saharan Africa have identified HIV risk behaviors among PLHIV, yet data are limited regarding HIV risk behaviors of PLHIV in clinical care. This paper describes the baseline sociodemographic characteristics, HIV transmission risk behaviors, and clinical data of a study evaluating an HIV prevention intervention package for HIV care and treatment clinics in Africa. The study was a longitudinal group-randomized trial in 9 intervention clinics and 9 comparison clinics in Kenya, Namibia, and Tanzania (N = 3538). Baseline participants were mostly female, married, had less than a primary education, and were relatively recently diagnosed with HIV. Fifty-two percent of participants had a partner of negative or unknown status, 24% were not using condoms consistently, and 11% reported STI symptoms in the last 6 months. There were differences in demographic and HIV transmission risk variables by country, indicating the need to consider local context in designing studies and to use caution when generalizing findings across African countries. Baseline data from this study indicate that participants were often engaging in HIV transmission risk behaviors, which supports the need for prevention with PLHIV (PwP). TRIAL REGISTRATION: ClinicalTrials.gov NCT01256463
A distance for partially labeled trees
In a number of practical situations, data have structure, and the relations among their component parts need to be coded with suitable data models. Trees are usually utilized for representing data for which hierarchical relations can be defined. This is the case in a number of fields like image analysis, natural language processing, protein structure, or music retrieval, to name a few. In those cases, procedures for comparing trees are very relevant. An approximate tree edit distance algorithm has been introduced for working with trees labeled only at the leaves. In this paper, it has been applied to handwritten character recognition, providing accuracies comparable to those of the most comprehensive search method while being as efficient as the fastest.
This work is supported by the Spanish Ministry projects DRIMS (TIN2009-14247-C02) and Consolider Ingenio 2010 (MIPRCV, CSD2007-00018), partially supported by EU ERDF, and the Pascal Network of Excellence.
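A crude distance for leaf-labeled trees can be sketched as follows (an illustrative recursion, not the paper's approximate algorithm: relabeling a leaf costs 1, deleting a subtree costs its number of leaves, and sibling lists are aligned by dynamic programming):

```python
def leaves(t):
    """Number of leaves in a tree; leaves are label strings, internal nodes tuples."""
    return 1 if isinstance(t, str) else sum(leaves(c) for c in t)

def dist(t1, t2):
    """Toy edit-style distance for trees labeled only at the leaves."""
    if isinstance(t1, str) and isinstance(t2, str):
        return 0 if t1 == t2 else 1          # leaf relabeling
    if isinstance(t1, str) or isinstance(t2, str):
        leaf, big = (t1, t2) if isinstance(t1, str) else (t2, t1)
        # match the leaf against one subtree of the internal node, delete the rest
        return min(dist(leaf, c) + leaves(big) - leaves(c) for c in big)
    n, m = len(t1), len(t2)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + leaves(t1[i - 1])
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + leaves(t2[j - 1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + leaves(t1[i - 1]),            # delete subtree
                          D[i][j - 1] + leaves(t2[j - 1]),            # insert subtree
                          D[i - 1][j - 1] + dist(t1[i - 1], t2[j - 1]))  # match
    return D[n][m]
```

Restricting edits to sibling-order-preserving alignments is what keeps this recursion cheap; a full unrestricted tree edit distance requires substantially more machinery.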
