The Stable Roommates problem with short lists
We consider two variants of the classical Stable Roommates problem with Incomplete (but strictly ordered) preference lists (SRI) that are degree constrained, i.e., preference lists are of bounded length. The first variant, EGAL d-SRI, involves finding an egalitarian stable matching in solvable instances of SRI with preference lists of length at most d. We show that this problem is NP-hard even if d=3. On the positive side, we give a (2d+3)/7-approximation algorithm for d ∈ {3,4,5}, which improves on the known bound of 2 for the unbounded preference list case. In the second variant of SRI, called d-SRTI, preference lists can include ties and are of length at most d. We show that the problem of deciding whether an instance of d-SRTI admits a stable matching is NP-complete even if d=3. We also consider the "most stable" version of this problem and prove a strong inapproximability bound for the d=3 case. However, for d=2 we show that the latter problem can be solved in polynomial time.
Comment: short version appeared at SAGT 201
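To make the stability notion used above concrete, here is a minimal Python sketch that tests whether a matching in a tiny, invented SRI instance admits a blocking pair; the instance, the agent names and the helper functions are illustrative assumptions, not data from the paper.

    # Illustrative stability check for a Stable Roommates instance with
    # incomplete, strictly ordered preference lists (SRI). Invented instance.
    prefs = {                # agent -> acceptable agents, most preferred first
        1: [2, 3],
        2: [1, 3],
        3: [2, 1],
        4: [],               # agent 4 finds nobody acceptable
    }

    def rank(a, b):
        """Position of b in a's list; None if b is unacceptable to a."""
        return prefs[a].index(b) if b in prefs[a] else None

    def prefers(a, b, current):
        """True if a would rather be with b than with its current partner."""
        rb = rank(a, b)
        if rb is None:
            return False
        return current is None or rb < rank(a, current)

    def is_stable(matching):
        """matching: dict agent -> partner (or None). Looks for blocking pairs."""
        agents = list(prefs)
        for i, a in enumerate(agents):
            for b in agents[i + 1:]:
                if matching.get(a) == b:
                    continue
                if prefers(a, b, matching.get(a)) and prefers(b, a, matching.get(b)):
                    return False        # {a, b} blocks the matching
        return True

    print(is_stable({1: 2, 2: 1, 3: None, 4: None}))   # stable here
    print(is_stable({1: 3, 3: 1, 2: None, 4: None}))   # {1, 2} blocks

Among all stable matchings of a solvable instance, an egalitarian one minimises the total rank that agents assign to their partners, which is the objective that EGAL d-SRI optimises.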
Manipulating Tournaments in Cup and Round Robin Competitions
In sports competitions, teams can manipulate the result by, for instance, throwing games. We show that we can decide in polynomial time how to manipulate round robin and cup competitions, two of the most popular types of sporting competition. In addition, we show that the minimal number of games that need to be thrown to manipulate the result can also be determined in polynomial time. Finally, we show that there are several variations of standard cup competitions where manipulation remains polynomial.
Comment: Proceedings of Algorithmic Decision Theory, First International Conference, ADT 2009, Venice, Italy, October 20-23, 200
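As one concrete, simplified reading of cup manipulation, the sketch below computes every team that can still be made to win a balanced knockout bracket, given a relation saying which match outcomes the manipulating coalition can enforce; the bracket, the honest results and the coalition are all invented for illustration, and the paper's polynomial-time algorithms are more refined than this bottom-up enumeration.

    # Which teams can be made cup winners, given controllable match outcomes?
    def possible_winners(bracket, can_beat):
        """bracket: a team name (leaf) or a pair (left, right) of sub-brackets.
        can_beat(a, b): True if the coalition can arrange for a to beat b.
        Returns the set of teams that can be made to win this sub-cup."""
        if not isinstance(bracket, tuple):
            return {bracket}
        left, right = bracket
        wl = possible_winners(left, can_beat)
        wr = possible_winners(right, can_beat)
        winners = set()
        for a in wl:
            if any(can_beat(a, b) for b in wr):
                winners.add(a)
        for b in wr:
            if any(can_beat(b, a) for a in wl):
                winners.add(b)
        return winners

    # Hypothetical 4-team cup: the (A vs B) winner plays the (C vs D) winner.
    bracket = (("A", "B"), ("C", "D"))
    honest = {("A", "B"), ("B", "C"), ("C", "A"), ("A", "D"), ("B", "D"), ("D", "C")}
    coalition = {"C", "D"}            # teams willing to throw their games

    def can_beat(a, b):
        # a beats b either honestly, or because b is in the coalition and throws.
        return (a, b) in honest or b in coalition

    print(possible_winners(bracket, can_beat))   # e.g. {'A', 'C'} here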
Local search for stable marriage problems with ties and incomplete lists
The stable marriage problem has a wide variety of practical applications,
ranging from matching resident doctors to hospitals, to matching students to
schools, or more generally to any two-sided market. We consider a useful
variation of the stable marriage problem, where the men and women express their
preferences using a preference list with ties over a subset of the members of
the other sex. Matchings are permitted only with people who appear in these
preference lists. In this setting, we study the problem of finding a stable
matching that marries as many people as possible. Stability is an envy-free
notion: no man and woman who are not married to each other would both prefer
each other to their partners or to being single. This problem is NP-hard. We
tackle this problem using local search, exploiting properties of the problem to
reduce the size of the neighborhood and to make local moves efficiently.
Experimental results show that this approach is able to solve large problems,
quickly returning stable matchings of large and often optimal size.
Comment: 12 pages, Proc. PRICAI 2010 (11th Pacific Rim International Conference on Artificial Intelligence), Byoung-Tak Zhang and Mehmet A. Orgun eds., Springer LNA
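To give a flavour of the objects this local search manipulates, the fragment below counts blocking pairs in a toy SMTI instance and applies one repair move; the instance, its tie encoding and the iteration cap are illustrative assumptions, and the paper's neighbourhood reduction and evaluation heuristics are more sophisticated than this blind repair loop.

    import random

    men_prefs   = {"m1": [["w1", "w2"]], "m2": [["w1"], ["w2"]]}   # inner lists = ties
    women_prefs = {"w1": [["m2"], ["m1"]], "w2": [["m1", "m2"]]}

    def rank(prefs, person, other):
        for r, tie in enumerate(prefs[person]):
            if other in tie:
                return r
        return None                      # unacceptable

    def blocking_pairs(matching):
        """matching maps each matched person to their partner (both directions)."""
        pairs = []
        for m in men_prefs:
            for w in women_prefs:
                if matching.get(m) == w:
                    continue
                rm, rw = rank(men_prefs, m, w), rank(women_prefs, w, m)
                if rm is None or rw is None:
                    continue
                m_better = matching.get(m) is None or rm < rank(men_prefs, m, matching[m])
                w_better = matching.get(w) is None or rw < rank(women_prefs, w, matching[w])
                if m_better and w_better:          # strict improvement for both
                    pairs.append((m, w))
        return pairs

    def satisfy(matching, m, w):
        """Local move: marry a blocking pair, divorcing their current partners."""
        new = dict(matching)
        for x in (new.pop(m, None), new.pop(w, None)):
            if x is not None:
                new.pop(x, None)
        new[m], new[w] = w, m
        return new

    matching = {}                          # start from the empty matching
    for _ in range(100):
        bps = blocking_pairs(matching)
        if not bps:
            break                          # stable; its size may still be sub-optimal
        matching = satisfy(matching, *random.choice(bps))

    print(matching, blocking_pairs(matching))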
Coreference detection of low quality objects
The problem of record linkage is a widely studied problem that aims to identify coreferent (i.e. duplicate) data in a structured data source. As indicated by Winkler, a solution to the record linkage problem is only possible if the error rate is sufficiently low. In other words, in order to successfully deduplicate a database, the objects in the database must be of sufficient quality. However, this assumption does not always hold. In this paper, we investigate how merging low quality objects into one high quality object can improve the process of record linkage. This general idea is illustrated in the context of string comparison, where strings of low quality (i.e. with a high typographical error rate) are merged into a string of high quality by using an n-dimensional Levenshtein distance matrix and computing the optimal alignment between the dirty strings. Results are presented and possible refinements are proposed.
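The merging step rests on Levenshtein-style alignment; the snippet below shows only the standard two-string dynamic programme that the paper's n-dimensional matrix generalises, applied to two made-up dirty strings. Merging k dirty copies would replace this table with a k-dimensional one and read a consensus string off the optimal alignment.

    def levenshtein(a, b):
        """Classic edit distance between strings a and b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution / match
            prev = curr
        return prev[-1]

    print(levenshtein("recod linkage", "record linkagee"))   # two dirty variants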
Wiretapping a hidden network
We consider the problem of maximizing the probability of hitting a
strategically chosen hidden virtual network by placing a wiretap on a single
link of a communication network. This can be seen as a two-player win-lose
(zero-sum) game that we call the wiretap game. The value of this game is the
greatest probability that the wiretapper can secure for hitting the virtual
network. The value is shown to equal the reciprocal of the strength of the
underlying graph.
We efficiently compute a unique partition of the edges of the graph, called
the prime-partition, and find the set of pure strategies of the hider that are
best responses against every maxmin strategy of the wiretapper. Using these
special pure strategies of the hider, which we call
omni-connected-spanning-subgraphs, we define a partial order on the elements of
the prime-partition. From the partial order, we obtain a linear number of
simple two-variable inequalities that define the maxmin-polytope, and a
characterization of its extreme points.
Our definition of the partial order allows us to find all equilibrium
strategies of the wiretapper that minimize the number of pure best responses of
the hider. Among these strategies, we efficiently compute the unique strategy
that maximizes the least punishment that the hider incurs for playing a pure
strategy that is not a best response. Finally, we show that this unique
strategy is the nucleolus of the recently studied simple cooperative spanning
connectivity game.
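For intuition about the "reciprocal of the strength" characterisation, the brute force below evaluates the strength of a tiny, invented graph by enumerating vertex partitions (crossing edges divided by the number of parts minus one) and reports the corresponding game value; the paper computes these quantities efficiently rather than by enumeration.

    from fractions import Fraction

    def partitions(items):
        """Yield all partitions of a list into non-empty blocks."""
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for part in partitions(rest):
            for i in range(len(part)):
                yield part[:i] + [[first] + part[i]] + part[i + 1:]
            yield [[first]] + part

    def strength(vertices, edges):
        best = None
        for part in partitions(list(vertices)):
            if len(part) < 2:
                continue
            block = {v: i for i, b in enumerate(part) for v in b}
            crossing = sum(1 for u, v in edges if block[u] != block[v])
            value = Fraction(crossing, len(part) - 1)
            best = value if best is None else min(best, value)
        return best

    V = [1, 2, 3]
    E = [(1, 2), (2, 3), (3, 1)]            # a triangle
    s = strength(V, E)
    print("strength:", s, "wiretap game value:", 1 / s)   # 3/2 and 2/3

Consistent with this, hiding one of the triangle's three spanning trees uniformly at random leaves every edge in the hidden network with probability 2/3, which is exactly the reciprocal of the strength.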
What Affects Social Attention? Social Presence, Eye Contact and Autistic Traits
Social understanding is facilitated by effectively attending to other people and the subtle social cues they generate. In order to more fully appreciate the nature of social attention and what drives people to attend to social aspects of the world, one must investigate the factors that influence social attention. This is especially important when attempting to create models of disordered social attention, e.g. a model of social attention in autism. Here we analysed participants' viewing behaviour during one-to-one social interactions with an experimenter. Interactions were conducted either live or via video (social presence manipulation). The participant was asked questions and then required to answer them. Experimenter eye contact was either direct or averted. Additionally, the influence of participants' self-reported autistic traits was investigated. We found that regardless of whether the interaction was conducted live or via a video, participants frequently looked at the experimenter's face, and they did this more often when being asked a question than when answering. Critical differences in social attention between the live and video interactions were also observed. Modifications of experimenter eye contact influenced participants' eye movements in the live interaction only, and increased autistic traits were associated with less looking at the experimenter for video interactions only. We conclude that analysing patterns of eye movements in response to strictly controlled video stimuli and natural real-world stimuli furthers the field's understanding of the factors that influence social attention. © 2013 Freeth et al.
Construct, Merge, Solve and Adapt: Application to the repetition-free longest common subsequence problem
In this paper we present the application of a recently proposed general algorithm for combinatorial optimization to the repetition-free longest common subsequence problem. The applied algorithm, which is labelled Construct, Merge, Solve & Adapt, generates sub-instances based on merging the solution components found in randomly constructed solutions. These sub-instances are subsequently solved by means of an exact solver. Moreover, the considered sub-instances change dynamically: new solution components are added at each iteration, and existing solution components are removed on the basis of indicators of their usefulness. The results of applying this algorithm to the repetition-free longest common subsequence problem show that it generally outperforms competing approaches from the literature. Moreover, they show that the algorithm is competitive with CPLEX for small and medium-sized problem instances, whereas it outperforms CPLEX for larger problem instances.
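The following skeleton spells out the construct/merge/solve/adapt loop as the abstract describes it; all helper callables (construct_solution, solve_exactly, evaluate), the ageing rule and the parameter defaults are placeholder assumptions to be supplied per problem, not the paper's exact configuration.

    def cmsa(construct_solution, solve_exactly, evaluate,
             iterations=100, constructions_per_iter=10, max_age=3):
        best = None
        age = {}                                  # solution component -> age
        for _ in range(iterations):
            # Construct: probabilistically build solutions and merge their
            # components into the sub-instance (new components start at age 0).
            for _ in range(constructions_per_iter):
                for component in construct_solution():
                    age.setdefault(component, 0)
            # Solve: run an exact solver (e.g. an ILP model) on the sub-instance.
            incumbent = solve_exactly(set(age))
            if best is None or evaluate(incumbent) > evaluate(best):
                best = incumbent
            # Adapt: refresh components used by the incumbent and discard
            # components that have gone unused for too long.
            for component in age:
                age[component] = 0 if component in incumbent else age[component] + 1
            age = {c: a for c, a in age.items() if a <= max_age}
        return best

For the repetition-free longest common subsequence problem, a natural solution component would be a matched pair of positions, one per input string, though that modelling choice is an assumption of this sketch rather than a detail taken from the paper.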
Profile-Based Optimal Matchings in the Student-Project Allocation Problem
In the Student-Project Allocation problem (SPA) we seek to assign students to individual or group projects offered by lecturers. Students provide a list of projects they find acceptable in order of preference. Each student can be assigned to at most one project, and there are constraints on the maximum number of students that can be assigned to each project and lecturer. We seek matchings of students to projects that are optimal with respect to profile, which is a vector whose rth component indicates how many students have their rth-choice project. We present an efficient algorithm for finding a greedy maximum matching in the SPA context – this is a maximum matching whose profile is lexicographically maximum. We then show how to adapt this algorithm to find a generous maximum matching – this is a matching whose reverse profile is lexicographically minimum. Our algorithms involve finding optimal flows in networks. We demonstrate how this approach allows additional constraints, such as lecturer lower quotas, to be handled flexibly.
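To pin down the profile terminology, the toy code below computes the profile of an assignment and compares two profiles under the greedy and generous criteria; the preference data and the assignments are made up, capacities are ignored, and the paper's actual algorithms work via network flows rather than direct comparison.

    def profile(assignment, prefs, r_max):
        """assignment: student -> project; prefs: student -> ranked project list.
        Returns (c_1, ..., c_r_max) where c_r counts students with an rth choice."""
        counts = [0] * r_max
        for student, project in assignment.items():
            counts[prefs[student].index(project)] += 1
        return tuple(counts)

    def greedier(p, q):
        """True if profile p beats q in the greedy (lexicographic-maximum) order."""
        return p > q

    def more_generous(p, q):
        """True if p beats q in the generous order (reverse profile lex-minimum)."""
        return tuple(reversed(p)) < tuple(reversed(q))

    prefs = {"s1": ["p1", "p2"], "s2": ["p1", "p2"], "s3": ["p2", "p1"]}
    a = {"s1": "p1", "s2": "p2", "s3": "p2"}     # hypothetical assignments
    b = {"s1": "p2", "s2": "p2", "s3": "p1"}
    pa, pb = profile(a, prefs, 2), profile(b, prefs, 2)
    print(pa, pb, greedier(pa, pb), more_generous(pa, pb))   # a wins on both counts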
A Minimal Periods Algorithm with Applications
Kosaraju, in "Computation of squares in a string", briefly described a linear-time algorithm for computing the minimal squares starting at each position in a word. Using the same construction of suffix trees, we generalize his result and describe in detail how to compute in O(k|w|) time the minimal k-th power, with period of length larger than s, starting at each position in a word w, for an arbitrary exponent k and integer s. We provide the complete proof of correctness of the algorithm, which is not completely clear in Kosaraju's original paper. The algorithm can be used as a subroutine to detect certain types of pseudo-patterns in words, which was our original motivation for studying this generalization.
Comment: 14 page
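As a reference point for the quantity being generalised, the brute force below finds the length of the shortest square (the k = 2 case, with no period-length threshold) starting at each position of a made-up word; the linear-time algorithm in the paper achieves the same via suffix trees.

    def minimal_squares(w):
        """Length of the shortest square starting at each position (None if none)."""
        n = len(w)
        out = []
        for i in range(n):
            best = None
            for p in range(1, (n - i) // 2 + 1):
                if w[i:i + p] == w[i + p:i + 2 * p]:
                    best = 2 * p            # shortest square w[i:i+2p]
                    break
            out.append(best)
        return out

    print(minimal_squares("abaababa"))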
