Online Reputation Systems for the Health Sector
People seeking medical advice and care often find it difficult to obtain reliable information about the quality and competence of health service providers. While transparent quality evaluation of products and services is commonplace in most commercial sectors, public access to information about the quality of health services is usually very restricted. Online reputation and rating systems represent an emerging trend in decision support for service consumers. Reputation systems collect information about other parties in order to derive measures of their trustworthiness or reliability on various aspects. More specifically, these systems use the Internet to collect ratings and to disseminate derived reputation scores. Online rating systems applied to the health sector are already emerging. This article describes robust principles for implementing online reputation systems in the health sector. In order to prevent uncontrolled ratings, our method ensures that only genuine consumers of a specific service can rate that service. The advantages of using online reputation systems in the health sector are that they can assist consumers in deciding which health services to use, and that they give health service providers an incentive to deliver high-quality services.
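One plausible way to realise the "only genuine consumers can rate" requirement is a single-use token scheme; the following sketch is an assumption for illustration, not the protocol described in the abstract. The rating system issues a random token per verified service episode and accepts a rating only with a valid, unspent token.

```python
import secrets

class RatingSystem:
    """Toy rating service: one verified service episode = one rating token."""

    def __init__(self):
        self._unspent = set()   # tokens issued but not yet used
        self.ratings = []       # accepted rating scores

    def issue_token(self) -> str:
        """Called when a service episode is verified as completed."""
        token = secrets.token_hex(16)
        self._unspent.add(token)
        return token

    def submit_rating(self, token: str, score: int) -> bool:
        """Accept a rating only for a valid, unspent token."""
        if token not in self._unspent:
            return False        # unknown or already-spent token
        self._unspent.remove(token)
        self.ratings.append(score)
        return True

rs = RatingSystem()
t = rs.issue_token()
print(rs.submit_rating(t, 5))          # True
print(rs.submit_rating(t, 1))          # False (token already spent)
print(rs.submit_rating("forged", 3))   # False
```

Each token is spent on use, so a consumer can rate a given episode at most once and outsiders cannot inject ratings.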
Network-aware Evaluation Environment for Reputation Systems
Parties of reputation systems rate each other and use ratings to compute reputation scores that drive their interactions. When deciding which reputation model to deploy in a networked environment, it is important to find the most suitable model and to determine the right initial configuration for it. This calls for an engineering approach to describing, implementing and evaluating reputation systems while taking into account specific aspects of both the reputation systems and the networked environment where they will run. We present a software tool (NEVER) for network-aware evaluation of reputation systems and their rapid prototyping through experiments performed according to user-specified parameters. To demonstrate the effectiveness of NEVER, we analyse reputation models based on the beta distribution and on maximum likelihood estimation.
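The beta-distribution family of reputation models mentioned above is commonly formulated as follows (a minimal sketch of the standard formulation, not NEVER's implementation): starting from a uniform Beta(1, 1) prior, r positive and s negative ratings yield a Beta(r+1, s+1) posterior whose expected value serves as the reputation score.

```python
def beta_reputation(positive: int, negative: int) -> float:
    """Expected value of the Beta(r+1, s+1) posterior over behaviour,
    given r positive and s negative ratings and a uniform prior."""
    return (positive + 1) / (positive + negative + 2)

# A party with no ratings scores a neutral 0.5.
print(beta_reputation(0, 0))   # 0.5
print(beta_reputation(8, 2))   # 0.75
```

The score moves smoothly from the neutral prior toward the observed rating ratio as evidence accumulates, which is what makes the model attractive for evaluation experiments with varying amounts of data.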
Rational Trust Modeling
Trust models are widely used in various computer science disciplines. The
main purpose of a trust model is to continuously measure trustworthiness of a
set of entities based on their behaviors. In this article, the novel notion of
"rational trust modeling" is introduced by bridging trust management and game
theory. Note that trust models/reputation systems have been used in game theory
(e.g., repeated games) for a long time; however, game theory has not been
utilized in the process of trust model construction; this is where the novelty
of our approach comes from. In our proposed setting, the designer of a trust
model assumes that the players who intend to utilize the model are
rational/selfish, i.e., they decide to become trustworthy or untrustworthy
based on the utility that they can gain. In other words, the players are
incentivized (or penalized) by the model itself to act properly. The problem of
trust management can be then approached by game theoretical analyses and
solution concepts such as Nash equilibrium. Although rationality might be
built-in in some existing trust models, we intend to formalize the notion of
rational trust modeling from the designer's perspective. This approach will
result in two fascinating outcomes. First of all, the designer of a trust model
can incentivize trustworthiness in the first place by incorporating proper
parameters into the trust function, which can be later utilized among selfish
players in strategic trust-based interactions (e.g., e-commerce scenarios).
Furthermore, using a rational trust model, we can prevent many well-known
attacks on trust models. These two prominent properties also help us to predict
the behavior of the players in subsequent steps through game-theoretic analyses.
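The rational/selfish-player setting above can be illustrated with a toy best-response computation (hypothetical payoffs, not from the article): cheating yields an immediate gain g but the trust function deducts a reputation penalty p, and the designer tunes p so that being trustworthy is the rational choice.

```python
def best_response(gain_from_cheating: float, reputation_penalty: float) -> str:
    """A rational player's choice given the model's parameters.

    Trustworthy behaviour is normalised to utility 0; cheating yields
    the immediate gain minus the reputation penalty the trust model
    imposes on future interactions.
    """
    utility_trustworthy = 0.0
    utility_cheat = gain_from_cheating - reputation_penalty
    return "trustworthy" if utility_trustworthy >= utility_cheat else "cheat"

print(best_response(gain_from_cheating=5.0, reputation_penalty=2.0))  # cheat
print(best_response(gain_from_cheating=5.0, reputation_penalty=8.0))  # trustworthy
```

When the penalty parameter exceeds the gain from cheating, "trustworthy" is the best response for every player, i.e. an equilibrium the designer engineered into the trust function.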
A decidable policy language for history-based transaction monitoring
Online trading invariably involves dealings between strangers, so it is
important for one party to be able to judge objectively the trustworthiness of
the other. In such a setting, the decision to trust a user may sensibly be
based on that user's past behaviour. We introduce a specification language
based on linear temporal logic for expressing a policy for categorising the
behaviour patterns of a user depending on its transaction history. We also
present an algorithm for checking whether the transaction history obeys the
stated policy. To be useful in a real setting, such a language should allow one
to express realistic policies which may involve parameter quantification and
quantitative or statistical patterns. We introduce several extensions of linear
temporal logic to cater for such needs: a restricted form of universal and
existential quantification; arbitrary computable functions and relations in the
term language; and a "counting" quantifier for counting how many times a
formula holds in the past. We then show that model checking a transaction
history against a policy, which we call the history-based transaction
monitoring problem, is PSPACE-complete in the size of the policy formula and
the length of the history. The problem becomes decidable in polynomial time
when the policies are fixed. We also consider the problem of transaction
monitoring in the case where not all the parameters of actions are observable.
We formulate two such "partial observability" monitoring problems, and show
their decidability under certain restrictions.
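The "counting" quantifier idea above can be sketched as a simple monitor (a hypothetical event encoding, not the paper's logic syntax): a policy bounding how many past events satisfy a predicate is checked by scanning the history once and counting.

```python
from typing import Callable, Sequence

def monitor(history: Sequence[str],
            pred: Callable[[str], bool],
            bound: int) -> bool:
    """Return True iff, at every prefix of the history, the number of
    events satisfying `pred` does not exceed `bound` — a counting
    policy checked in a single linear pass."""
    count = 0
    for event in history:
        if pred(event):
            count += 1
            if count > bound:   # policy violated at this point
                return False
    return True

history = ["pay", "fail", "pay", "fail", "pay"]
print(monitor(history, lambda e: e == "fail", bound=2))  # True
print(monitor(history, lambda e: e == "fail", bound=1))  # False
```

This single-predicate case is linear in the history; the PSPACE-completeness result concerns the general problem where the bound-checking is nested inside arbitrary temporal formulas.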
Stereotype reputation with limited observability
Assessing trust and reputation is essential in multi-agent systems where agents must decide who to interact with. Assessment typically relies on the direct experience of a trustor with a trustee agent, or on information from witnesses. Where direct or witness information is unavailable, such as when agent turnover is high, stereotypes learned from common traits and behaviour can provide this information. Such traits may be only partially or subjectively observed, with witnesses not observing traits of some trustees or interpreting their observations differently. Existing stereotype-based techniques are unable to account for such partial observability and subjectivity. In this paper, we propose a method for extracting information from witness observations that enables stereotypes to be applied in partially and subjectively observable dynamic environments. Specifically, we present a mechanism for learning translations between observations made by trustor and witness agents with subjective interpretations of traits. We show through simulations that such translation is necessary for reliable reputation assessments in dynamic environments with partial and subjective observability.
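One simple way to learn such a translation between subjective trait labels (a minimal sketch under an assumed data format, not the paper's mechanism) is majority voting over trustees both agents have observed: for each label the witness uses, pick the trustor label it most often co-occurs with.

```python
from collections import Counter, defaultdict

def learn_translation(co_observations):
    """co_observations: list of (witness_label, trustor_label) pairs,
    one per trait of a trustee observed by both agents.
    Returns a dict mapping each witness label to the trustor label
    it most frequently co-occurs with."""
    votes = defaultdict(Counter)
    for witness_label, trustor_label in co_observations:
        votes[witness_label][trustor_label] += 1
    return {w: counts.most_common(1)[0][0] for w, counts in votes.items()}

pairs = [("large", "big"), ("large", "big"), ("large", "small"),
         ("tiny", "small")]
print(learn_translation(pairs))  # {'large': 'big', 'tiny': 'small'}
```

With such a mapping, a witness's stereotype reports can be re-expressed in the trustor's own vocabulary before being fed into reputation assessment, even when the two agents never observe identical trait sets.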
