The Computational Power of Beeps
In this paper, we study the quantity of computational resources (state
machine states and/or probabilistic transition precision) needed to solve
specific problems in a single hop network where nodes communicate using only
beeps. We begin by focusing on randomized leader election. We prove a lower
bound on the states required to solve this problem with a given error bound,
probability precision, and (when relevant) network size lower bound. We then show that this bound is tight by giving a matching upper bound. Noting that our optimal upper
bound is slow, we describe two faster algorithms that trade some state
optimality to gain efficiency. We then turn our attention to more general
classes of problems by proving that once you have enough states to solve leader
election with a given error bound, you have (within constant factors) enough
states to simulate correctly, with this same error bound, a logspace TM with a
constant number of unary input tapes, allowing you to solve a large and
expressive set of problems. These results identify a key simplicity threshold
beyond which useful distributed computation is possible in the beeping model.
Comment: Extended abstract to appear in the Proceedings of the International Symposium on Distributed Computing (DISC 2015).
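For intuition, here is a minimal simulation of the folklore randomized protocol for leader election in a single-hop beeping network; this is an illustrative sketch, not the state-optimal algorithm analyzed in the paper, and all names and parameters below are ours.

```python
import random

def beeping_leader_election(n_nodes, max_rounds=64, p=0.5, rng=random):
    """Each round, every active node beeps with probability p; a node that
    stays silent in a round where someone else beeped drops out. With high
    probability a single node survives after O(log n) rounds."""
    active = set(range(n_nodes))
    for _ in range(max_rounds):
        if len(active) <= 1:
            break
        beepers = {v for v in active if rng.random() < p}
        if beepers:                      # silent nodes hear a beep and retire
            active = beepers
    return active                        # a singleton set means a unique leader

winners = beeping_leader_election(1000)
print(len(winners), "leader(s) remain")
```

Each round roughly halves the expected number of contenders, which is why logarithmically many rounds suffice; the paper's contribution is bounding the states and coin precision needed, not the round complexity.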
Reconstruction of Network Evolutionary History from Extant Network Topology and Duplication History
Genome-wide protein-protein interaction (PPI) data are readily available
thanks to recent breakthroughs in biotechnology. However, PPI networks of
extant organisms capture only snapshots of network evolution. Inferring the whole evolutionary history is therefore a challenging problem in computational biology.
In this paper, we present a likelihood-based approach to inferring network
evolution history from the topology of PPI networks and the duplication
relationship among the paralogs. Simulations show that our approach outperforms existing methods in reconstruction accuracy. Moreover, the growth parameters of several real PPI networks estimated by our method are more consistent with those predicted in the literature.
Comment: 15 pages, 5 figures, submitted to ISBRA 201
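As a rough sketch of the backward-inference idea (a deliberate simplification, not the paper's likelihood model), one can greedily merge the node pair whose neighborhoods overlap most, treating overlap as a proxy for the pair having arisen by duplication; the function below and its networkx test graph are hypothetical.

```python
import networkx as nx
from itertools import combinations

def reverse_duplication_step(G):
    """One greedy backward step: pick the pair of nodes whose neighborhoods
    overlap most (a crude proxy for 'these two arose by duplication'),
    contract them into one node, and return the new graph and the pair."""
    best, best_score = None, -1.0
    for u, v in combinations(G.nodes, 2):
        nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
        union = nu | nv
        score = len(nu & nv) / len(union) if union else 0.0
        if score > best_score:
            best, best_score = (u, v), score
    u, v = best
    return nx.contracted_nodes(G, u, v, self_loops=False), best

# Peel a toy network back a few steps from its present-day topology.
G = nx.barabasi_albert_graph(30, 2, seed=1)
for _ in range(5):
    G, merged = reverse_duplication_step(G)
    print("merged", merged, "->", G.number_of_nodes(), "nodes left")
```

A real reconstruction would score candidate merges by the probability the growth model assigns to the corresponding forward duplication, and would restrict merges to known paralog pairs.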
Network Archaeology: Uncovering Ancient Networks from Present-day Interactions
Often questions arise about old or extinct networks. What proteins interacted
in a long-extinct ancestor species of yeast? Who were the central players in
the Last.fm social network 3 years ago? Our ability to answer such questions
has been limited by the unavailability of past versions of networks. To
overcome these limitations, we propose several algorithms for reconstructing a
network's history of growth given only the network as it exists today and a
generative model by which the network is believed to have evolved. Our
likelihood-based method finds a probable previous state of the network by
reversing the forward growth model. This approach retains node identities so
that the history of individual nodes can be tracked. We apply these algorithms
to uncover older, non-extant biological and social networks believed to have
grown via several models, including duplication-mutation with complementarity,
forest fire, and preferential attachment. Through experiments on both synthetic
and real-world data, we find that our algorithms can estimate node arrival times, identify anchor nodes from which new nodes copy links, and reveal significant features of networks that have long since disappeared.
Comment: 16 pages, 10 figures
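A minimal sketch of the peel-back idea for one of these models, assuming preferential attachment with m links per arrival and using low degree as a crude stand-in for the model likelihood the paper computes:

```python
import networkx as nx

def reverse_preferential_attachment(G, m=2):
    """Greedily remove, one at a time, the node judged most likely to be
    the newest arrival under preferential attachment; here 'most likely'
    is approximated by lowest current degree."""
    H = G.copy()
    removal_order = []
    while H.number_of_nodes() > m + 1:
        newest = min(H.nodes, key=H.degree)   # crude likelihood proxy
        removal_order.append(newest)
        H.remove_node(newest)
    removal_order.extend(H.nodes)             # remaining seed nodes
    return list(reversed(removal_order))      # inferred oldest-first order

G = nx.barabasi_albert_graph(50, 2, seed=7)
print(reverse_preferential_attachment(G)[:10])
```

Because `barabasi_albert_graph` happens to label nodes in arrival order, the inferred earliest arrivals can be sanity-checked against small labels; a likelihood-based method replaces the degree heuristic with the actual probability of each candidate reversal.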
Algorithms to Explore the Structure and Evolution of Biological Networks
High-throughput experimental protocols have revealed thousands of relationships amongst genes and proteins under various conditions. These putative associations are being aggressively mined to decipher the structural and functional architecture of the cell. One useful tool for exploring this data has been computational network analysis. In this thesis, we propose a collection of novel algorithms to explore the structure and evolution of large, noisy, and sparsely annotated biological networks.
We first introduce two information-theoretic algorithms to extract interesting patterns and modules embedded in large graphs. The first, graph summarization, uses the minimum description length principle to find compressible parts of the graph. The second, VI-Cut, uses the variation of information to non-parametrically find groups of topologically cohesive and similarly annotated nodes in the network. We show that both algorithms find structure in biological data that is consistent with known biological processes, protein complexes, genetic diseases, and operational taxonomic units. We also propose several algorithms to systematically generate an ensemble of near-optimal network clusterings and show how these multiple views can be used together to identify clustering dynamics that any single solution approach would miss.
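The variation-of-information criterion behind VI-Cut is easy to state concretely. The sketch below (an illustrative helper, not code from the thesis) computes it for two clusterings given as node-to-label maps:

```python
from math import log
from collections import Counter

def variation_of_information(c1, c2):
    """VI(c1, c2) = H(c1|c2) + H(c2|c1) for two clusterings of the same
    node set, each given as a {node: cluster_label} dict. It is 0 exactly
    when the two clusterings agree."""
    n = len(c1)
    p1, p2 = Counter(c1.values()), Counter(c2.values())
    joint = Counter((c1[v], c2[v]) for v in c1)
    vi = 0.0
    for (a, b), n_ab in joint.items():
        p_ab = n_ab / n
        vi -= p_ab * (log(p_ab / (p1[a] / n)) + log(p_ab / (p2[b] / n)))
    return vi

# Two partitions of six nodes: {0,1,2}{3,4,5} versus {0,1}{2,3}{4,5}.
A = {v: v // 3 for v in range(6)}
B = {v: v // 2 for v in range(6)}
print(variation_of_information(A, B))
```

Roughly, VI-Cut uses this distance to pick, non-parametrically, the clustering that best agrees with the partially annotated nodes.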
To facilitate the study of ancient networks, we introduce a framework called "network archaeology" for reconstructing the node-by-node and edge-by-edge arrival history of a network. Starting with a present-day network, we apply a probabilistic growth model backwards in time to find high-likelihood previous states of the graph. This allows us to explore how interactions and modules may have evolved over time. In experiments with real-world social and biological networks, we find that our algorithms can recover significant features of ancestral networks that have long since disappeared.
Our work is motivated by the need to understand large and complex biological systems that are being revealed to us by imperfect data. As data continues to pour in, we believe that computational network analysis will continue to be an essential tool towards this end.
Learning the Structural Vocabulary of a Network
Networks have become instrumental in deciphering how information is processed and transferred within systems in almost every scientific field today. Nearly all network analyses, however, have relied on humans to devise structural features of networks believed to be most discriminative for an application. We present a framework for comparing and classifying networks without human-crafted features using deep learning. After training, autoencoders contain hidden units that encode a robust structural vocabulary for succinctly describing graphs. We use this feature vocabulary to tackle several network mining problems and find improved predictive performance versus many popular features used today. These problems include uncovering growth mechanisms driving the evolution of networks, predicting protein network fragility, and identifying environmental niches for metabolic networks. Deep learning offers a principled approach for mining complex networks and tackling graph-theoretic problems.
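A toy version of this idea, assuming graphs are first flattened into degree-histogram descriptors (an input representation we assume here, not necessarily the one used in the paper): train a single-hidden-layer autoencoder with plain NumPy gradient descent and treat the hidden activations as the learned structural vocabulary.

```python
import numpy as np
import networkx as nx

def degree_signature(G, bins=16):
    """Fixed-length descriptor: normalized histogram of node degrees
    (degrees above `bins` fall outside the range and are dropped)."""
    degs = [d for _, d in G.degree()]
    hist, _ = np.histogram(degs, bins=bins, range=(0, bins))
    return hist / max(1, len(degs))

# Corpus: graphs from two growth mechanisms (an illustrative choice).
graphs = [nx.barabasi_albert_graph(100, 2, seed=i) for i in range(50)] + \
         [nx.erdos_renyi_graph(100, 0.04, seed=i) for i in range(50)]
X = np.array([degree_signature(G) for G in graphs])

# Tiny autoencoder: 16 -> 4 -> 16, trained by batch gradient descent.
rng = np.random.default_rng(0)
d, h, lr = X.shape[1], 4, 0.1
W1, b1 = rng.normal(0, 0.1, (d, h)), np.zeros(h)
W2, b2 = rng.normal(0, 0.1, (h, d)), np.zeros(d)
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)            # hidden code = learned vocabulary
    err = (H @ W2 + b2) - X             # reconstruction error
    gW2, gb2 = H.T @ err / len(X), err.mean(0)
    gH = err @ W2.T * (1 - H**2)
    gW1, gb1 = X.T @ gH / len(X), gH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

codes = np.tanh(X @ W1 + b1)            # features for downstream classifiers
print(codes[:2].round(2))
```

The four-dimensional codes can then feed any standard classifier, for example to predict which growth mechanism produced a held-out graph.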
Thank You, Come Again: Examining the Role of Quality and Trust on eCommerce Repurchase Intentions
As the prevalence of online consumer shopping continues to grow, more and more e-retailers are launching websites. In this increasingly competitive environment, building customer loyalty and retaining customers is integral to achieving sustained profitability. While one stream of literature has suggested that e-retailers should concentrate on improving quality, another stream has recommended that the focus should be on building trust with customers. This paper represents an early, working attempt to synthesize these parallel streams, investigating how the interplay between three forms of quality (information, system, and service) and trust helps to retain customers. Integrating information systems and marketing research, the results of this paper suggest that trust mediates the relationship between each type of quality and both satisfaction and repurchase intentions. Furthermore, of the three types of quality examined, service quality has the greatest impact on trust, followed by information quality and then system quality. The paper concludes with a discussion of this preliminary model as well as directions for the future development of this project.
DomAINS - DOMain Adapted INStructions Dataset Generation framework
Domain-specific large language models (LLMs) demonstrate strong domain expertise by training on large-scale, domain-aligned instruction data. However, manually constructing such datasets is resource-intensive due to the need for expert annotators. A promising alternative is to use LLMs to synthesize training data. While existing frameworks effectively generate general instruction datasets, generating domain-specific instruction datasets presents three main challenges: the data must (1) be strongly aligned with the target domain, (2) exhibit high in-domain diversity, and (3) be factually grounded in domain-specific knowledge. In this paper, we present DomAINS, a three-stage framework to generate instruction datasets for any target domain using only a domain name and a brief description. DomAINS constructs a tree of domain-relevant keywords to increase in-domain diversity, retrieves factually grounded domain articles from Bing, and prompts an LLM to generate domain-aligned instruction data based on the retrieved articles. Our evaluation across nine domains shows that models tuned on DomAINS-generated datasets achieve 60–95% win rates over those trained on datasets from existing general-domain synthetic frameworks, demonstrating the effectiveness of our approach.
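A hypothetical skeleton of the three-stage pipeline; `call_llm` and `search_articles` are placeholder stubs for an LLM client and a web-search client (e.g. Bing), not APIs defined by the paper, and the prompts are illustrative.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def search_articles(query: str, k: int = 3) -> list:
    raise NotImplementedError("plug in a web-search client (e.g. Bing)")

def build_keyword_tree(domain, description, depth=2, fanout=5):
    """Stage 1: grow a tree of domain-relevant keywords to push
    in-domain diversity beyond the seed domain name."""
    tree, frontier = {}, [domain]
    for _ in range(depth):
        next_frontier = []
        for kw in frontier:
            prompt = (f"List {fanout} subtopics of '{kw}' within the domain "
                      f"'{domain}' ({description}), one per line.")
            children = call_llm(prompt).splitlines()[:fanout]
            tree[kw] = children
            next_frontier += children
        frontier = next_frontier
    return tree

def generate_dataset(domain, description):
    dataset = []
    tree = build_keyword_tree(domain, description)
    for kw in {k for kids in tree.values() for k in kids}:
        for article in search_articles(kw):          # Stage 2: grounding
            prompt = ("Write one instruction-response pair grounded in the "
                      f"following article about {kw}:\n{article}")
            dataset.append(call_llm(prompt))         # Stage 3: generation
    return dataset
```

The key design point is that grounding generation in retrieved articles, rather than letting the model free-associate, is what targets the factuality requirement.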
They Call for Help, But Don't Always Listen: The Development of the User-Help Desk Knowledge Application Model
The IS help desk function plays a central role in boundary-spanning knowledge exchanges within organizations. Help desk employees provide technical support to users in an effort to transfer knowledge and enable users to apply this knowledge autonomously in the future. However, despite their importance, little is known about the factors that affect knowledge application in this context. Adopting interpersonal influence theory, this paper develops a model that examines how the dimensions of source credibility (expertise, trustworthiness, and attractiveness) impact users' knowledge application in a help desk environment. The model is tested using a sample of working adults at a large Midwestern hospital who had significant experience requesting help from an IS help desk. Results indicate that all three dimensions of source credibility predict users' ability to apply the knowledge transferred from a help desk employee. The implications of these results are discussed.
