Trust based collaborative filtering
k-nearest neighbour (kNN) collaborative filtering (CF), the widely successful algorithm underpinning recommender systems, attempts to relieve the problem of information overload by generating predicted ratings for items users have not yet expressed opinions about; each predicted rating is computed from the ratings given by like-minded individuals. Like-mindedness, or similarity-based recommendation, is the cause of a variety of problems that plague recommender systems. An alternative view of the problem, based on trust, offers the potential to address many of these limitations in CF. In this work we present a variation of kNN, the trusted k-nearest recommenders (or kNR) algorithm, which allows users to learn whom to trust, and how much, by evaluating the utility of the rating information they receive. This method redefines the way CF is performed and, while avoiding some of the pitfalls that similarity-based CF is prone to, outperforms the basic similarity-based methods in terms of prediction accuracy.
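The similarity-based prediction step the abstract contrasts against can be sketched as follows; this is a minimal illustration of standard kNN CF, not the paper's kNR algorithm, and the cosine-similarity choice and all names are assumptions:

```python
import math

def cosine_sim(a, b):
    # Similarity computed over co-rated items only; 0.0 if there is no overlap.
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(a[i] ** 2 for i in common))
    nb = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (na * nb)

def predict(user, item, ratings, k=2):
    """Predict a rating for `item` as a similarity-weighted average of the
    k most similar users who have rated it."""
    neighbours = [
        (cosine_sim(ratings[user], ratings[v]), ratings[v][item])
        for v in ratings
        if v != user and item in ratings[v]
    ]
    top = sorted(neighbours, reverse=True)[:k]
    total = sum(s for s, _ in top)
    if total == 0:
        return None  # no like-minded raters found
    return sum(s * r for s, r in top) / total
```

For example, with `ratings = {"u1": {"a": 5, "b": 3}, "u2": {"a": 4, "b": 3, "c": 4}, "u3": {"a": 1, "b": 5, "c": 2}}`, `predict("u1", "c", ratings)` blends u2's and u3's ratings of `c` in proportion to how closely their past ratings agree with u1's.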
The Impact of Simple Institutions in Experimental Economies with Poverty Traps
We introduce an experimental approach to studying the effect of institutions on economic growth. In each period, agents produce and trade output in a market and allocate it to consumption and investment. Productivity is higher if the total capital stock is above a threshold. This threshold externality generates two steady states: a suboptimal poverty trap and an optimal steady state. In a baseline treatment, the economies converge to the poverty trap. However, the ability to make public announcements, or to vote on competing and binding policies, increases output, welfare, and capital stock. Combining these two simple institutions guarantees that the economies escape the poverty trap.
Physikit: Data Engagement Through Physical Ambient Visualizations in the Home
Internet of things (IoT) devices and sensor kits have the potential to democratize the access, use, and appropriation of data. Despite the increased availability of low-cost sensors, most of the produced data is "black box" in nature: users often do not know how to access or interpret it. We propose a "human-data design" approach in which end-users are given tools to create, share, and use data through tangible and physical visualizations. This paper introduces Physikit, a system designed to allow users to explore and engage with environmental data through physical ambient visualizations. We report on the design and implementation of Physikit, and present a two-week field study which showed that participants gained an increased sense of the meaning of data, embellished and appropriated the basic visualizations to make them blend into their homes, and used the visualizations as a probe for community engagement and social behavior.
Engaging without over-powering: A case study of a FLOSS project
This is the post-print version of the published chapter. Copyright © 2010 IFIP International Federation for Information Processing. The role of Open Source Software (OSS) in the e-learning business has become increasingly fundamental over the last 10 years, as corporate and government organizations have developed their educational and training programs on top of out-of-the-box OSS tools. This paper qualitatively documents the decision of the largest UK e-learning provider, the Open University, to adopt the Moodle e-learning system, and how it has been successfully deployed on its site after a multi-million investment. A further quantitative study provides evidence of how a commercial stakeholder has engaged with, and produced outputs for, the Moodle community. Lessons learned from this experience by the stakeholders include the crucial factors of contributing to the OSS community and adapting to an evolving technology. It also becomes evident how commercial partners helped this OSS system achieve the transition from an "average" OSS system to a successful multi-site, collaborative, community-based OSS project.
A Model-Based Analysis of GC-Biased Gene Conversion in the Human and Chimpanzee Genomes
GC-biased gene conversion (gBGC) is a recombination-associated process that favors the fixation of G/C alleles over A/T alleles. In mammals, gBGC is hypothesized to contribute to variation in GC content, rapidly evolving sequences, and the fixation of deleterious mutations, but its prevalence and general functional consequences remain poorly understood. gBGC is difficult to incorporate into models of molecular evolution and so far has primarily been studied using summary statistics from genomic comparisons. Here, we introduce a new probabilistic model that captures the joint effects of natural selection and gBGC on nucleotide substitution patterns, while allowing for correlations along the genome in these effects. We implemented our model in a computer program, called phastBias, that can accurately detect gBGC tracts about 1 kilobase or longer in simulated sequence alignments. When applied to real primate genome sequences, phastBias predicts gBGC tracts that cover roughly 0.3% of the human and chimpanzee genomes and account for 1.2% of human-chimpanzee nucleotide differences. These tracts fall in clusters, particularly in subtelomeric regions; they are enriched for recombination hotspots and fast-evolving sequences; and they display an ongoing fixation preference for G and C alleles. They are also significantly enriched for disease-associated polymorphisms, suggesting that they contribute to the fixation of deleterious alleles. The gBGC tracts provide a unique window into historical recombination processes along the human and chimpanzee lineages. They supply additional evidence of long-term conservation of megabase-scale recombination rates accompanied by rapid turnover of hotspots. Together, these findings shed new light on the evolutionary, functional, and disease implications of gBGC. The phastBias program and our predicted tracts are freely available. © 2013 Capra et al.
Can processes make relationships work? The Triple Helix between structure and action
This contribution seeks to explore how complex adaptive theory can be applied at the conceptual level to unpack Triple Helix models. We use two cases to examine this issue – the Finnish Strategic Centres for Science, Technology & Innovation (SHOKs) and the Canadian Business-led Networks of Centres of Excellence (BL-NCE). Both types of centres are organisational structures that aspire to be business-led, with a considerable portion of their activities driven by (industrial) users’ interests and requirements. Reflecting on the centres’ activities along three dimensions – knowledge generation, consensus building and innovation – we contend that conceptualising the Triple Helix from a process perspective will improve the dialogue between stakeholders and shareholders
What makes re-finding information difficult? A study of email re-finding
Re-finding information that has been seen or accessed before is a task which can be relatively straightforward, but often it can be extremely challenging, time-consuming and frustrating. Little is known, however, about what makes one re-finding task harder or easier than another. We performed a user study to learn about the contextual factors that influence users' perception of task difficulty in the context of re-finding email messages. 21 participants were issued re-finding tasks to perform on their own personal collections. The participants' responses to questions about the tasks, combined with demographic data and collection statistics for the experimental population, provide a rich basis to investigate the variables that can influence the perception of difficulty. A logistic regression model was developed to examine the relationships between variables and determine whether any factors were associated with perceived task difficulty. The model reveals strong relationships between difficulty and the time elapsed since a message was read, remembering when the sought-after email was sent, remembering other recipients of the email, the experience of the user, and the user's filing strategy. We discuss what these findings mean for the design of re-finding interfaces and future re-finding research.
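A logistic regression of the kind the abstract describes can be sketched as below. The predictor names, synthetic data, and plain gradient-descent fitting are all illustrative assumptions, not the study's actual variables or estimation procedure:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit per-feature weights plus a bias term by stochastic gradient
    descent on the log-loss."""
    w = [0.0] * (len(X[0]) + 1)  # last entry is the bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            err = sigmoid(z) - yi  # gradient of log-loss w.r.t. z
            for j in range(len(xi)):
                w[j] -= lr * err * xi[j]
            w[-1] -= lr * err
    return w

# Hypothetical predictors: [weeks since message was read, remembers sender (0/1)];
# outcome: 1 = task perceived as difficult.
X = [[1, 1], [2, 1], [8, 0], [10, 0], [1, 0], [9, 1]]
y = [0, 0, 1, 1, 0, 1]
w = fit_logistic(X, y)
# Estimated probability that re-finding a 12-week-old message from a
# forgotten sender is perceived as difficult.
prob_hard = sigmoid(w[0] * 12 + w[1] * 0 + w[2])
```

The fitted weights play the role of the regression coefficients the study examines: a positive weight on elapsed time, for instance, corresponds to the reported relationship between time since reading and perceived difficulty.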
J D Bernal: philosophy, politics and the science of science
This paper is an examination of the philosophical and political legacy of John Desmond Bernal. It addresses the evidence of an emerging consensus on Bernal based on the recent biography of Bernal by Andrew Brown and the reviews it has received. It takes issue with this view of Bernal, which tends to be admiring of his scientific contribution, bemused by his sexuality, condescending to his philosophy and hostile to his politics. This article is a critical defence of his philosophical and political position.
