A language for information commerce processes
Automating information commerce requires languages to represent the typical information commerce processes. Existing languages and standards either cover only very specific types of business models or are too general to concisely capture the specific properties of information commerce processes. We introduce a language that is specifically designed for information commerce. It can be used directly for the implementation of the processes and communication required in information commerce. It covers existing business models known either from standard proposals or from existing information commerce applications on the Internet. The language has a concise logical semantics. In this paper we present the language concepts and an implementation architecture.
Selection Bias in News Coverage: Learning it, Fighting it
News entities must select and filter the coverage they broadcast through their respective channels, since the set of world events is too large to be treated exhaustively. The subjective nature of this filtering induces biases due to, among other things, resource constraints, editorial guidelines, ideological affinities, or even the fragmented nature of the information at a journalist's disposal. The magnitude and direction of these biases are, however, widely unknown. The absence of ground truth, the sheer size of the event space, and the lack of an exhaustive set of absolute features to measure make it difficult to observe the bias directly, to characterize its nature, and to factor it out to ensure a neutral coverage of the news. In this work, we introduce a methodology to capture the latent structure of media's decision process on a large scale. Our contribution is threefold. First, we show media coverage to be predictable using personalization techniques, and evaluate our approach on a large set of events collected from the GDELT database. We then show that a personalized and parametrized approach not only exhibits higher accuracy in coverage prediction, but also provides an interpretable representation of the selection bias. Last, we propose a method to select a set of sources by leveraging the latent representation. These selected sources provide a more diverse and egalitarian coverage, while still retaining the most actively covered events.
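To make the personalization idea concrete, the following is a minimal sketch of a coverage predictor in the spirit of recommender-style models: a logistic matrix factorization over (source, event) pairs. The model, dimensions, and training loop are illustrative assumptions, not the formulation used in the paper.

```python
# Minimal sketch: predicting whether a source covers an event with a
# logistic matrix-factorization model. All names and dimensions are
# illustrative; this is not the paper's actual formulation.
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_events, dim = 50, 200, 16

# Latent factors: one vector per source (its "selection profile") and per event.
S = rng.normal(scale=0.1, size=(n_sources, dim))
E = rng.normal(scale=0.1, size=(n_events, dim))

def predict(s, e):
    """Probability that source s covers event e."""
    return 1.0 / (1.0 + np.exp(-S[s] @ E[e]))

def sgd_step(s, e, covered, lr=0.05, reg=1e-4):
    """One stochastic gradient step on the logistic loss."""
    err = predict(s, e) - covered          # gradient of the log-loss w.r.t. the score
    gS, gE = err * E[e] + reg * S[s], err * S[s] + reg * E[e]
    S[s] -= lr * gS
    E[e] -= lr * gE

# Toy training loop over synthetic (source, event, covered) observations.
observations = [(rng.integers(n_sources), rng.integers(n_events), rng.integers(2))
                for _ in range(10_000)]
for s, e, covered in observations:
    sgd_step(s, e, covered)
```

Under such a formulation, the learned per-source vectors would play the role of the interpretable latent representation from which a diverse subset of sources could then be selected.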
A Dynamic Embedding Model of the Media Landscape
Information about world events is disseminated through a wide variety of news channels, each with specific considerations in the choice of their reporting. Although the multiplicity of these outlets should ensure a variety of viewpoints, recent reports suggest that the rising concentration of media ownership may void this assumption. This observation motivates the study of the impact of ownership on the global media landscape and its influence on the coverage the actual viewer receives. To this end, the selection of reported events has been shown to be informative about the high-level structure of the news ecosystem. However, existing methods only provide a static view into an inherently dynamic system, yielding underperforming statistical models and hindering our understanding of the media landscape as a whole.
In this work, we present a dynamic embedding method that learns to capture the decision process of individual news sources in their selection of reported events, while also enabling the systematic detection of large-scale transformations in the media landscape over prolonged periods of time. In an experiment covering over 580M real-world event mentions, we show our approach to outperform static embedding methods in predictive terms. We demonstrate the potential of the method for news monitoring applications and investigative journalism by shedding light on important changes in programming induced by mergers and acquisitions, policy changes, or network-wide content diffusion. These findings offer evidence of strong content convergence trends inside large broadcasting groups, influencing the news ecosystem in a time of increasing media ownership concentration.
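As a rough illustration of what a dynamic embedding objective could look like, the sketch below gives each source one embedding per time step and ties consecutive steps together with a smoothness penalty. All names, shapes, and the loss itself are assumptions for illustration, not the paper's actual model.

```python
# Minimal sketch of a dynamic source embedding: one embedding per source and
# time step, tied together by a temporal-smoothness penalty. Illustrative only;
# the paper's actual objective and training procedure may differ.
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_events, n_steps, dim = 30, 100, 12, 8

S = rng.normal(scale=0.1, size=(n_steps, n_sources, dim))  # source embeddings per time step
E = rng.normal(scale=0.1, size=(n_events, dim))            # static event embeddings

def loss(data, lam=0.1):
    """Log-loss on (t, source, event, covered) tuples plus smoothness across time."""
    total = 0.0
    for t, s, e, covered in data:
        p = 1.0 / (1.0 + np.exp(-S[t, s] @ E[e]))
        total -= covered * np.log(p + 1e-9) + (1 - covered) * np.log(1 - p + 1e-9)
    # Penalize abrupt changes of a source's embedding between consecutive steps.
    total += lam * np.sum((S[1:] - S[:-1]) ** 2)
    return total

# Toy data: random (time step, source, event, covered) observations.
data = [(rng.integers(n_steps), rng.integers(n_sources), rng.integers(n_events), rng.integers(2))
        for _ in range(500)]
print(loss(data))
```

With an objective of this kind, unusually large jumps of a source's embedding between consecutive time steps would flag candidate transformations, e.g., programming changes following a merger.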
CrossFlow: Integrating Workflow Management and Electronic Commerce
The CrossFlow architecture provides support for cross-organisational workflow management in dynamically established virtual enterprises. The creation of a business relationship between a service provider organisation performing a service on behalf of a consumer organisation can be made dynamic when augmented by virtual market technology, the dynamic configuration of the contract enactment infrastructures, and the provision of fine-grained service monitoring and control. Standard ways of describing services and contracts can be combined with matchmaking technology to create a virtual market for such service provision and consumption. A provider can then advertise its services in the market and consumers can search for a compatible business partner. This provides choice in selecting a partner and allows the decision to be deferred to a point in time where it can be based on the most up-to-date requirements of the consumer and the service offers in the market. The penalty for deferred decision making is the time needed to set up the infrastructure in each organisation for the dynamically established contract. A further aspect of CrossFlow was therefore to exploit the contract in the dynamic and automatic configuration of the contract enactment and supervision infrastructures of the respective organisations, and in linking them in a dynamic fashion. The electronic contract, which results from the agreement between the newly established business partners, completely specifies the intended collaboration between them. Given the importance of the business process enacted by the provider, this includes fine-grained monitoring and control to allow tight co-operation between the organisations.
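As a purely hypothetical illustration of how an agreed electronic contract might drive the configuration of enactment and monitoring infrastructure, the sketch below represents a contract as plain data and derives a toy enactment configuration from it. All field names are invented for illustration and do not reflect CrossFlow's actual contract language.

```python
# Hypothetical sketch of an electronic contract driving the configuration of
# enactment and monitoring infrastructure. Field names are invented for
# illustration and do not reflect CrossFlow's actual contract language.
from dataclasses import dataclass, field

@dataclass
class MonitoringPoint:
    activity: str          # workflow activity exposed to the consumer
    notify_on: str         # e.g. "started", "completed", "failed"

@dataclass
class Contract:
    provider: str
    consumer: str
    service: str
    deadline_hours: int
    monitoring: list[MonitoringPoint] = field(default_factory=list)

def configure_enactment(contract: Contract) -> dict:
    """Derive a (toy) enactment configuration from the agreed contract."""
    return {
        "process": contract.service,
        "timeout_h": contract.deadline_hours,
        "subscriptions": [(m.activity, m.notify_on) for m in contract.monitoring],
    }

c = Contract("LogistiCo", "InsureCorp", "parcel-delivery", 48,
             [MonitoringPoint("pickup", "completed"), MonitoringPoint("delivery", "failed")])
print(configure_enactment(c))
```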
CrossFlow: Cross-Organizational Workflow Management for Service Outsourcing in Dynamic Virtual Enterprises
In this report, we present the approach to cross-organizational workflow management of the CrossFlow project. CrossFlow is a European research project aiming to support cross-organizational workflows in dynamic virtual enterprises. The cooperation in these virtual enterprises is based on dynamic service outsourcing specified in electronic contracts. Service enactment is performed by dynamically linking the workflow management infrastructures of the involved organizations. Extended service enactment support is provided in the form of cross-organizational transaction management and process control, advanced quality-of-service monitoring, and support for high-level flexibility in service enactment. CrossFlow technology is realized on top of a commercial workflow management platform and applied in two real-world scenarios, in the contexts of a logistics company and an insurance company.
webXice: an Infrastructure for Information Commerce on the WWW
Systems for information commerce on the WWW have to support flexible business models if they are to cover the wide range of requirements imposed by the different types of information businesses. This leads to non-trivial functional and security requirements on both the provider and consumer side, for which we introduce an architecture and a system implementation, webXice. We focus on the question of how participants with minimal technological requisites, i.e., with only standard Web browsers available, can be enabled to participate in information commerce at a system level, without sacrificing the functionality and security required by an autonomous participant in an information commerce scenario. In particular, we propose an implementation strategy to efficiently support persistent message logging for light-weight clients, which enables clients to collect and manage non-repudiable messages as proofs. We believe that the capability to support minimal system platforms is a necessary precondition for the widespread use of any information commerce infrastructure.
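The sketch below illustrates one way a persistent, append-only message log could be kept on behalf of a light-weight client, using hash chaining so that tampering with stored messages is detectable. A real non-repudiation scheme would additionally rely on digital signatures; nothing here reflects webXice's actual implementation.

```python
# Minimal sketch of a persistent, hash-chained message log kept on behalf of a
# light-weight client. A real deployment would add digital signatures for
# non-repudiation; this sketch only shows the append-only chaining idea.
import hashlib, json

class MessageLog:
    def __init__(self):
        self.entries = []          # each entry: {"msg": ..., "prev": ..., "digest": ...}

    def append(self, msg: str) -> str:
        prev = self.entries[-1]["digest"] if self.entries else "0" * 64
        digest = hashlib.sha256((prev + msg).encode()).hexdigest()
        self.entries.append({"msg": msg, "prev": prev, "digest": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any tampering with a stored message breaks it."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev or \
               hashlib.sha256((prev + e["msg"]).encode()).hexdigest() != e["digest"]:
                return False
            prev = e["digest"]
        return True

log = MessageLog()
log.append(json.dumps({"order": 42, "price": 9.99}))
assert log.verify()
```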
Configuration of Distributed Message Converter Systems using Performance Modeling
Finding a configuration of a distributed system that satisfies performance goals is a complex search problem involving many design parameters, such as hardware selection, job distribution, and process configuration. Performance models are powerful tools for analysing potential system configurations; however, their evaluation is expensive, so only a limited number of possible configurations can be evaluated. In this paper we present a systematic method to find a satisfactory configuration with feasible effort, based on a two-step approach. First, a hardware configuration is determined using performance estimates; then the software configuration is incrementally optimized by evaluating Layered Queueing Network models. We applied this method to the design of performant EDI converter systems in the financial domain, where growing message volumes need to be handled due to the increasing importance of B2B interaction.
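The two-step idea can be illustrated with a toy search loop: a coarse hardware sizing from simple throughput estimates, followed by a budget-limited search over software parameters in which each candidate is scored by an expensive model evaluation. The function evaluate_lqn below is an invented stand-in for a real Layered Queueing Network solver.

```python
# Sketch of the two-step configuration search: a coarse hardware choice from
# back-of-the-envelope estimates, then budget-limited software tuning driven by
# an expensive performance-model evaluation. `evaluate_lqn` is a stand-in for a
# real Layered Queueing Network solver and is invented for illustration.
import itertools

def estimate_hardware(msg_per_sec: float, msgs_per_core: float = 200.0) -> int:
    """Step 1: rough sizing -- how many cores are needed for the expected load."""
    return max(1, round(msg_per_sec / msgs_per_core))

def evaluate_lqn(cores: int, converter_threads: int, parser_threads: int) -> float:
    """Stand-in for an LQN model evaluation returning mean response time (s)."""
    contention = max(1.0, (converter_threads + parser_threads) / (4 * cores))
    return 0.05 * contention + 1.0 / converter_threads + 0.5 / parser_threads

def tune_software(cores: int, budget: int = 20) -> tuple:
    """Step 2: budget-limited search over thread counts using model evaluations."""
    best, best_rt = None, float("inf")
    for conv, pars in itertools.product(range(1, 9), range(1, 9)):
        if budget == 0:
            break
        budget -= 1
        rt = evaluate_lqn(cores, conv, pars)
        if rt < best_rt:
            best, best_rt = (conv, pars), rt
    return best, best_rt

cores = estimate_hardware(msg_per_sec=1500)
print(cores, tune_software(cores))
```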
The American Commitment to Private International Political Communications: A View of Free Europe, Inc.
The principal service of distributed hash tables (DHTs) is route(id, data), which sends data to the peer responsible for id, typically using O(log(# of peers)) overlay hops. Certain applications, such as peer-to-peer information retrieval, generate billions of small messages that are concurrently inserted into a DHT. These applications can generate messages faster than the DHT can process them. To support such demanding applications, a DHT needs a congestion control mechanism to efficiently handle high loads of messages. In this paper we provide an extended study on congestion control for DHTs: we present a theoretical analysis demonstrating that congestion control is absolutely necessary for applications that generate elastic traffic. We then present a new congestion control algorithm for DHTs. We provide extensive live evaluations on a ModelNet cluster and the PlanetLab testbed, which show that our algorithm is nearly loss-free, fair, and provides low lookup times and high throughput under cross-load.
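As a generic illustration of sender-side congestion control for bulk DHT inserts, the sketch below uses a simple AIMD window to bound the number of outstanding route(id, data) requests. It is not the algorithm proposed in the paper.

```python
# Minimal sketch of sender-side congestion control for bulk DHT inserts:
# an AIMD window limits the number of outstanding route(id, data) requests.
# Purely illustrative; this is not the algorithm proposed in the paper.
class AIMDWindow:
    def __init__(self, initial=4, max_window=256):
        self.window = initial          # allowed outstanding requests
        self.outstanding = 0
        self.max_window = max_window

    def can_send(self) -> bool:
        return self.outstanding < self.window

    def on_send(self):
        self.outstanding += 1

    def on_ack(self):
        self.outstanding -= 1
        self.window = min(self.max_window, self.window + 1)   # additive increase

    def on_loss_or_timeout(self):
        self.outstanding = max(0, self.outstanding - 1)
        self.window = max(1, self.window // 2)                 # multiplicative decrease
```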
280 Birds with One Stone: Inducing Multilingual Taxonomies from Wikipedia using Character-level Classification
We propose a simple, yet effective, approach to inducing multilingual taxonomies from Wikipedia. Given an English taxonomy, our approach leverages the interlanguage links of Wikipedia, followed by character-level classifiers, to induce high-precision, high-coverage taxonomies in other languages. Through experiments, we demonstrate that our approach significantly outperforms state-of-the-art, heuristics-heavy approaches for six languages. As a consequence of our work, we release presumably the largest and most accurate multilingual taxonomic resource, spanning over 280 languages.
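A minimal sketch of the character-level classification step might look as follows: a character n-gram model that judges, from surface form alone, whether a candidate child/parent link belongs in the taxonomy. The training pairs, labels, and scikit-learn pipeline are toy placeholders, not the paper's setup.

```python
# Minimal sketch of a character-level classifier for taxonomy induction:
# decide from surface form alone whether a candidate child/parent pair is a
# valid taxonomic link. Training data and features are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example is "child ## parent"; label 1 means the link belongs in the taxonomy.
pairs = ["Kathedrale ## Kirchengebäude", "Paris ## Hauptstadt",
         "Kathedrale ## Fußballverein", "Paris ## Chemisches Element"]
labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(max_iter=1000),
)
model.fit(pairs, labels)
print(model.predict(["Notre-Dame ## Kirchengebäude"]))
```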
