Location privacy based on trusted computing and secure logging
Many operators of cellphone networks now offer location-based services to their customers, whereby an operator often outsources service provisioning to a third-party provider. Since a person’s location could reveal sensitive information about the person, the operator must ensure that the service provider processes location information about the operator’s customers in a privacy-preserving way. So far, this assurance has been based on a legal contract between the operator and the provider. However, there has been no technical mechanism that lets the operator verify whether the provider adheres to the privacy policy outlined in the contract. We propose an architecture for location-based services based on Trusted Computing and Secure Logging that provides such a technical mechanism. Trusted Computing lets an operator query the configuration of a location-based service. The operator will hand over location information to the service only if the service is configured such that the service provider cannot get access to location information using software-based attacks. This includes passive attacks, where the provider monitors information flowing into and out of its service, and active attacks, where the provider modifies or injects customer queries to the service. We introduce several requirements that must be satisfied by a location-based service to defend against passive attacks. Furthermore, we present Secure Logging, an auditing mechanism to defend against active attacks.
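The core idea behind an auditable log of this kind can be illustrated with a minimal hash-chain sketch (a simplified illustration under assumed details, not the authors' implementation; all names are hypothetical). Each entry's MAC covers the previous entry's MAC, so an auditor holding the key can detect any modified, reordered, or truncated entry:

```python
import hashlib
import hmac

def append_entry(log, key, message):
    """Append a tamper-evident entry: each entry's MAC covers the
    previous entry's MAC, forming a hash chain over the whole log."""
    prev_mac = log[-1][1] if log else b"\x00" * 32
    mac = hmac.new(key, prev_mac + message.encode(), hashlib.sha256).digest()
    log.append((message, mac))

def verify_log(log, key):
    """Recompute the chain; any modified, reordered, or deleted
    entry breaks every MAC from that point onward."""
    prev_mac = b"\x00" * 32
    for message, mac in log:
        expected = hmac.new(key, prev_mac + message.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(mac, expected):
            return False
        prev_mac = mac
    return True

key = b"operator-audit-key"  # hypothetical key shared with the auditing operator
log = []
append_entry(log, key, "query: customer=42 location requested")
append_entry(log, key, "query: customer=7 location requested")
assert verify_log(log, key)

# An actively tampering provider rewrites an entry -> verification fails.
log[0] = ("query: customer=99 location requested", log[0][1])
assert not verify_log(log, key)
```

A real deployment would additionally need key management and protection of the current chain state, which this sketch omits.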
Randomness in Competitions
We study the effects of randomness on competitions based on an elementary random process in which there is a finite probability that a weaker team upsets a stronger team. We apply this model to sports leagues and sports tournaments, and compare the theoretical results with empirical data. Our model shows that single-elimination tournaments are efficient but unfair: the number of games is proportional to the number of teams N, but the probability that the weakest team wins decays only algebraically with N. In contrast, leagues, where every team plays every other team, are fair but inefficient: the top √N of teams remain in contention for the championship, while the probability that the weakest team becomes champion is exponentially small. We also propose a gradual elimination schedule that consists of a preliminary round and a championship round. Initially, teams play a small number of preliminary games, and subsequently, a few teams qualify for the championship round. This algorithm is fair and efficient: the best team wins with a high probability and the number of games scales as N^{9/5}, whereas traditional leagues require N^3 games to fairly determine a champion.
Comment: 10 pages, 8 figures; reviews arXiv:physics/0512144, arXiv:physics/0608007, arXiv:cond-mat/0607694, arXiv:physics/061221
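The elementary upset process described in the abstract is straightforward to simulate. The sketch below (a minimal illustration, not the authors' code; the bracket layout and parameters are assumptions) runs single-elimination brackets in which the weaker team wins each game with a fixed upset probability, showing that the best team wins far more often than the worst, yet the worst team's chances remain non-negligible:

```python
import random

def single_elim_champion(n_teams, upset_prob, rng):
    """One single-elimination bracket. Teams are ranked 0 (strongest)
    to n_teams-1 (weakest); in each game the weaker team wins with
    probability upset_prob. n_teams is assumed to be a power of 2."""
    teams = list(range(n_teams))
    rng.shuffle(teams)  # random bracket seeding
    while len(teams) > 1:
        winners = []
        for a, b in zip(teams[::2], teams[1::2]):
            strong, weak = (a, b) if a < b else (b, a)
            winners.append(weak if rng.random() < upset_prob else strong)
        teams = winners
    return teams[0]

rng = random.Random(0)
trials = 5000
wins_best = sum(single_elim_champion(16, 0.3, rng) == 0 for _ in range(trials))
wins_worst = sum(single_elim_champion(16, 0.3, rng) == 15 for _ in range(trials))
```

With 16 teams and upset probability 0.3, the best team must win 4 straight games (probability 0.7^4 ≈ 0.24), while the worst needs 4 straight upsets (0.3^4 ≈ 0.008): unlikely, but only algebraically small, matching the abstract's "efficient but unfair" characterization.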
Efficacy of new-generation antidepressants assessed with the Montgomery-Asberg depression rating scale, the gold standard clinician rating scale: a meta-analysis of randomised placebo-controlled trials
It has been claimed that efficacy estimates based on the Hamilton Depression Rating Scale (HDRS) underestimate antidepressants' true treatment effects due to the instrument's poor psychometric properties. The aim of this study is to compare efficacy estimates based on the HDRS with those based on the gold standard instrument, the Montgomery-Asberg Depression Rating Scale (MADRS).
Methodological flaws, conflicts of interest, and scientific fallacies: implications for the evaluation of antidepressants’ efficacy and harm
Background: In current psychiatric practice, antidepressants are widely and with ever-increasing frequency prescribed to patients. However, several scientific biases obfuscate estimates of antidepressants’ efficacy and harm, and these are barely recognized in treatment guidelines. The aim of this mini-review is to critically evaluate the efficacy and harm of antidepressants for acute and maintenance treatment with respect to systematic biases related to industry funding and trial methodology.
Methods: Narrative review based on a comprehensive search of the literature.
Results: It is shown that the pooled efficacy of antidepressants is weak and below the threshold of a minimally clinically important change once publication and reporting biases are considered. Moreover, the small mean difference in symptom reductions relative to placebo is possibly attributable to observer effects in unblinded assessors and to patient expectancies. With respect to trial dropout rates, a hard outcome not subject to observer bias, no difference was observed between antidepressants and placebo. The discontinuation trials on the efficacy of antidepressants in maintenance therapy are systematically flawed, because in these studies spontaneous remitters are excluded, whereas half of all patients who remitted on antidepressants are abruptly switched to placebo. This can cause a severe withdrawal syndrome that is easily misdiagnosed as a relapse when assessed on subjective symptom rating scales. In accordance, the findings of naturalistic long-term studies suggest that maintenance therapy has no clear benefit, and non-drug users do not show increased recurrence rates. Moreover, a growing body of evidence from hundreds of randomized controlled trials suggests that antidepressants cause suicidality, but this risk is underestimated because data from industry-funded trials are systematically flawed. Unselected, population-wide observational studies indicate that depressive patients who use antidepressants are at an increased risk of suicide and have a higher rate of all-cause mortality than matched controls.
Conclusion: The strong reliance on industry-funded research results in an uncritical approval of antidepressants. Due to several flaws, such as publication and reporting bias, unblinding of outcome assessors, and concealment and recoding of serious adverse events, the efficacy of antidepressants is systematically overestimated, and harm is systematically underestimated. Therefore, I conclude that antidepressants are largely ineffective and potentially harmful.
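For readers unfamiliar with how a "pooled efficacy" figure of the kind discussed above is formed, the following is a minimal fixed-effect inverse-variance sketch. The trial numbers are entirely hypothetical and are not data from this review; it only illustrates the arithmetic of pooling:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of per-trial effect
    sizes (e.g. standardized mean differences vs placebo): each
    trial is weighted by the inverse of its variance."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, se

# hypothetical per-trial standardized mean differences and variances
effects = [0.35, 0.20, 0.28, 0.15]
variances = [0.02, 0.03, 0.015, 0.025]
est, se = pooled_effect(effects, variances)
ci = (est - 1.96 * se, est + 1.96 * se)  # approximate 95% CI
```

Publication bias enters exactly here: if the smaller-effect trials never make it into `effects`, the pooled estimate shifts upward, which is the mechanism the review describes.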
A universal method for automated gene mapping
Small insertions or deletions (InDels) constitute a ubiquitous class of sequence polymorphisms found in eukaryotic genomes. Here, we present an automated high-throughput genotyping method that relies on the detection of fragment-length polymorphisms (FLPs) caused by InDels. The protocol utilizes standard sequencers and genotyping software. We have established genome-wide FLP maps for both Caenorhabditis elegans and Drosophila melanogaster that facilitate genetic mapping with a minimum of manual input and at comparatively low cost.
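The genotype-calling logic behind FLP detection can be sketched as follows (a simplified illustration with hypothetical names and tolerances, not the published protocol): wild-type and variant alleles yield amplified fragments whose lengths differ by the InDel size, so the observed peak sizes map directly onto a genotype call.

```python
def call_genotype(fragment_sizes, wild_type_size, indel_size, tol=1):
    """Classify an InDel genotype from observed fragment lengths.
    The variant allele's fragment differs from wild type by the
    InDel size; tol is the sizing tolerance in base pairs."""
    variant_size = wild_type_size + indel_size
    has_wt = any(abs(s - wild_type_size) <= tol for s in fragment_sizes)
    has_var = any(abs(s - variant_size) <= tol for s in fragment_sizes)
    if has_wt and has_var:
        return "heterozygous"
    if has_var:
        return "homozygous variant"
    if has_wt:
        return "homozygous wild-type"
    return "no call"

# An 8-bp deletion: both the 200-bp and 192-bp peaks are present.
assert call_genotype([200, 192], 200, -8) == "heterozygous"
assert call_genotype([192], 200, -8) == "homozygous variant"
```

Running this per marker across a plate of samples is what makes the approach amenable to the high-throughput, low-manual-input mapping the abstract describes.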
