
    Effect of correlation on the traffic capacity of Time Varying Communication Network

    The network topology and the routing strategy are major factors affecting the traffic dynamics of a network. In this work, we aim to design an optimal time-varying network structure and to allocate an efficient route to each user in the network. The network topology is designed by considering the addition, removal, and rewiring of links. At each time instant, a new node connects to an existing node based on its degree and its correlation with its neighbors. Traffic congestion is handled by rewiring some congested links and removing anti-preferential and correlated links. Centrality plays an important role in finding the most important nodes in the network: the more central a node is, the more often it lies on the shortest routes between user pairs, and the more likely it is to become congested by the large volume of data arriving from its neighborhood. Therefore, each user's route is selected so that the sum of the centralities of the nodes on that route is minimized. Thereafter, we analyze the network structure using various network properties, such as the clustering coefficient, centrality, average shortest path length, rich-club coefficient, average packet travel time, and order parameter.
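    The route-selection criterion described above (pick the route whose nodes have the smallest summed centrality) can be sketched as a Dijkstra-style search where the cost of a path is the sum of the centralities of the nodes it visits. The graph, centrality scores, and function name below are illustrative toy data, not taken from the paper:

```python
import heapq

def min_centrality_route(adj, centrality, src, dst):
    """Find a route from src to dst minimizing the summed centrality
    of the nodes it visits (a sketch of the route-selection criterion;
    graph and scores here are toy data)."""
    dist = {src: centrality[src]}  # cost of reaching each node
    prev = {}
    heap = [(centrality[src], src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + centrality[v]  # path cost = sum of node centralities
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # reconstruct the route by walking predecessors back to src
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy graph: node 1 is a highly central hub; the route avoids it.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
centrality = {0: 0.1, 1: 0.9, 2: 0.2, 3: 0.1}
path, cost = min_centrality_route(adj, centrality, 0, 3)
print(path, cost)  # routes via node 2, skirting the hub
```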

    A Framework for Individualizing Predictions of Disease Trajectories by Exploiting Multi-Resolution Structure

    For many complex diseases, there is a wide variety of ways in which an individual can manifest the disease. The challenge of personalized medicine is to develop tools that can accurately predict the trajectory of an individual's disease, which in turn can enable clinicians to optimize treatments. We represent an individual's disease trajectory as a continuous-valued, continuous-time function describing the severity of the disease over time. We propose a hierarchical latent variable model that individualizes predictions of disease trajectories. This model shares statistical strength across observations at different resolutions: the population, subpopulation, and individual levels. We describe an algorithm for learning population and subpopulation parameters offline, and an online procedure for dynamically learning individual-specific parameters. Finally, we validate our model on the task of predicting the course of interstitial lung disease, a leading cause of death among patients with the autoimmune disease scleroderma. We compare our approach against the state of the art and demonstrate significant improvements in predictive accuracy.
    Comment: Appeared in Neural Information Processing Systems (NIPS) 201
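    The idea of sharing statistical strength across resolutions can be illustrated with a toy shrinkage estimate that blends an individual's own observations with subpopulation and population means. The function and the pseudo-count weights below are hypothetical stand-ins, not the paper's latent-variable model:

```python
def hierarchical_predict(y_indiv, mu_subpop, mu_pop, n0_sub=5.0, n0_pop=10.0):
    """Shrinkage estimate of an individual's trajectory level that pools
    across resolutions (a toy stand-in for the paper's hierarchical model;
    the pseudo-counts n0_sub and n0_pop are illustrative assumptions)."""
    n = len(y_indiv)
    s = sum(y_indiv)
    # With few individual observations the estimate leans on the
    # subpopulation/population means; with many, on the individual's data.
    return (s + n0_sub * mu_subpop + n0_pop * mu_pop) / (n + n0_sub + n0_pop)

# An individual whose true level is 10, in a subpopulation with mean 5
# and a population with mean 0:
few = hierarchical_predict([10.0] * 2, mu_subpop=5.0, mu_pop=0.0)
many = hierarchical_predict([10.0] * 100, mu_subpop=5.0, mu_pop=0.0)
print(round(few, 2), round(many, 2))  # more data pulls toward 10
```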

    Fair End to End Window Based Congestion Control in Time Varying Data Communication Networks

    Communication networks are time-varying, and hence fair sharing of network resources among users in such a dynamic environment is a challenging task. In this context, a time-varying network model is designed and the shortest route for each user is found. In the designed network model, an end-to-end window-based congestion control scheme is developed with the help of internal nodes (routers), and the end user receives implicit feedback (RTT and throughput). The scheme is considered fair if the allocation of resources among users minimizes the overall congestion, or backlog, in the network. The window update approach is based on a multi-class fluid model; the window is updated dynamically by considering delays (communication, propagation, and queuing) and the backlog of packets on the users' routes. Convergence and stability of the window size are established using a Lyapunov function. A comparative study with other window-based methods is also provided.
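    A window update driven by implicit feedback (RTT and backlog) can be sketched with a much-simplified rule: grow the window while the path shows no queuing, and shrink it as queuing delay and backlog build up. This is a toy stand-in, not the paper's multi-class fluid model; the gain alpha and the link parameters are illustrative:

```python
def update_window(w, rtt, backlog, base_rtt, alpha=0.5, w_min=1.0):
    """One step of a delay- and backlog-driven window update
    (a simplified sketch in the spirit of the scheme described;
    the exact fluid-model dynamics are not reproduced here)."""
    queuing_delay = rtt - base_rtt  # implicit congestion signal
    # grow while there is no queuing; shrink as delay * backlog grows
    return max(w_min, w + alpha * (1.0 - queuing_delay * backlog))

# Toy link: backlog and queuing delay appear once the window exceeds
# the link capacity; the window then settles near an equilibrium.
capacity, base_rtt, w = 10.0, 1.0, 2.0
for _ in range(200):
    backlog = max(0.0, w - capacity)
    rtt = base_rtt + 0.1 * backlog  # delay grows with the queue
    w = update_window(w, rtt, backlog, base_rtt)
print(round(w, 2))
```

    At equilibrium the update's growth and backpressure terms balance (here at backlog sqrt(10), i.e. a window near 13.16), loosely mirroring the stability analysis the abstract attributes to a Lyapunov argument.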

    Learning (Predictive) Risk Scores in the Presence of Censoring due to Interventions

    A large and diverse set of measurements is regularly collected during a patient's hospital stay to monitor their health status. Tools for integrating these measurements into severity scores that accurately track changes in illness severity can improve clinicians' ability to provide timely interventions. Existing approaches for creating such scores either 1) rely on experts to fully specify the severity score, or 2) train a predictive score, using supervised learning, by regressing against a surrogate marker of severity, such as the presence of downstream adverse events. The first approach does not extend to diseases where an accurate score cannot be elicited from experts. The second approach often produces scores that suffer from bias due to treatment-related censoring (Paxton, 2013). We propose a novel ranking-based framework for disease severity score learning (DSSL). DSSL exploits the following key observation: while it is challenging for experts to quantify disease severity at any given time, it is often easy to compare the disease severity at two different times. Extending existing ranking algorithms, DSSL learns a function that maps a vector of a patient's measurements to a scalar severity score, such that the resulting score is temporally smooth and consistent with the expert's ranking of pairs of disease states. We apply DSSL to the problem of learning a sepsis severity score using a large, real-world dataset. The learned scores significantly outperform state-of-the-art clinical scores in ranking patient states by severity and in early detection of future adverse events. We also show that the learned disease severity trajectories are consistent with clinical expectations of disease evolution. Further, using simulated datasets, we show that DSSL exhibits better generalization performance under changes in treatment patterns than the above approaches.
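    The core "compare two states, don't score one" idea can be sketched with a RankNet-style pairwise logistic loss: learn a linear score w . x so that the later, more severe state of each pair scores higher. This omits the paper's temporal-smoothness term; the toy data and hyperparameters are assumptions for illustration:

```python
import math
import random

def train_severity_score(pairs, dim, lr=0.1, epochs=200, seed=0):
    """Learn a linear severity score s(x) = w . x from ranked pairs
    (x_less_severe, x_more_severe) with a logistic pairwise loss
    (a sketch of the pairwise-ranking idea, not the full DSSL)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    for _ in range(epochs):
        for x_lo, x_hi in pairs:
            # margin = s(x_hi) - s(x_lo); we want it large and positive
            margin = sum(wi * (hi - lo) for wi, hi, lo in zip(w, x_hi, x_lo))
            # gradient of log(1 + exp(-margin)) w.r.t. margin,
            # capped to avoid overflow in exp for large margins
            g = -1.0 / (1.0 + math.exp(min(margin, 50.0)))
            for i in range(dim):
                w[i] -= lr * g * (x_hi[i] - x_lo[i])
    return w

# Toy data: feature 0 tracks true severity, feature 1 is noise.
rng = random.Random(1)
pairs = []
for _ in range(100):
    lo = [rng.random(), rng.random()]
    hi = [lo[0] + rng.random(), rng.random()]  # feature 0 strictly higher
    pairs.append((lo, hi))
w = train_severity_score(pairs, dim=2)
print(w[0] > abs(w[1]))  # the score should weight the severity feature
```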

    Discretizing Logged Interaction Data Biases Learning for Decision-Making

    Time series data that are not measured at regular intervals are commonly discretized as a preprocessing step. For example, data about customer arrival times might be simplified by summing the number of arrivals within hourly intervals, which produces a discrete-time time series that is easier to model. In this abstract, we show that discretization introduces a bias that affects models trained for decision-making. We refer to this phenomenon as discretization bias, and show that we can avoid it by using continuous-time models instead.
    Comment: This is a standalone short paper describing a new type of bias that can arise when learning from time series data for sequential decision-making problems
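    A minimal illustration of the information that binning discards (not the paper's decision-making experiment): two very different within-interval arrival patterns become identical once reduced to counts per interval.

```python
from collections import Counter

def discretize(arrival_times, bin_width=1.0):
    """Bin continuous arrival times into counts per interval,
    as in the hourly-count preprocessing described above."""
    return Counter(int(t // bin_width) for t in arrival_times)

# A near-simultaneous burst and an evenly spread pattern, both
# inside the first interval...
burst = [0.01, 0.02, 0.03]
spread = [0.10, 0.50, 0.90]
# ...are indistinguishable after discretization:
print(discretize(burst) == discretize(spread))  # True
```

    Any decision rule trained on the binned counts is forced to treat these two regimes identically, which is one way the bias described above can enter.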

    Discovering shared and individual latent structure in multiple time series

    This paper proposes a nonparametric Bayesian method for exploratory data analysis and feature construction in continuous time series. Our method focuses on understanding shared features in a set of time series that exhibit significant individual variability. Our method builds on the framework of latent Dirichlet allocation (LDA) and its extension to hierarchical Dirichlet processes, which allows us to characterize each series as switching between latent "topics", where each topic is characterized as a distribution over "words" that specify the series dynamics. However, unlike standard applications of LDA, we discover the words as we learn the model. We apply this model to the task of tracking the physiological signals of premature infants; our model obtains clinically significant insights as well as useful features for supervised learning tasks.
    Comment: Additional supplementary section in tex file
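    The generative picture described above (a series switching between latent "topics", each a distribution over "words" governing the dynamics) can be sketched forward as a toy sampler. All probabilities here are illustrative assumptions; the paper learns the topics and words rather than fixing them:

```python
import random

def sample_series(topic_words, switch_prob, length, rng):
    """Generate a toy series that switches between two latent 'topics',
    each a distribution over 'words' (a sketch of the generative view,
    not the hierarchical Dirichlet process inference in the paper)."""
    topic = 0
    out = []
    for _ in range(length):
        if rng.random() < switch_prob:  # occasionally switch regime
            topic = 1 - topic
        words, probs = topic_words[topic]
        out.append(rng.choices(words, probs)[0])
    return out

# Two illustrative regimes of physiological dynamics:
topic_words = [(["low", "falling"], [0.7, 0.3]),
               (["high", "rising"], [0.6, 0.4])]
series = sample_series(topic_words, 0.1, 50, random.Random(0))
print(series[:5])
```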

    Preventing Failures Due to Dataset Shift: Learning Predictive Models That Transport

    Classical supervised learning produces unreliable models when training and target distributions differ, with most existing solutions requiring samples from the target domain. We propose a proactive approach which learns a relationship in the training domain that will generalize to the target domain by incorporating prior knowledge of aspects of the data generating process that are expected to differ, as expressed in a causal selection diagram. Specifically, we remove variables generated by unstable mechanisms from the joint factorization to yield the Surgery Estimator, an interventional distribution that is invariant to the differences across environments. We prove that the surgery estimator finds stable relationships in strictly more scenarios than previous approaches which only consider conditional relationships, and demonstrate this in simulated experiments. We also evaluate on real-world data for which the true causal diagram is unknown, performing competitively against entirely data-driven approaches.
    Comment: In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019. Previously presented at the NeurIPS 2018 Causal Learning Workshop
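    As a schematic illustration of deleting an unstable factor from the joint factorization (a deliberately simple three-variable chain, far simpler than the paper's selection-diagram machinery): suppose environment E affects the label only through X, and the mechanism p(X | E) is the one expected to shift. Truncating the factorization removes the unstable factor:

```latex
p(Y, X, E) \;=\; p(Y \mid X)\, p(X \mid E)\, p(E)
\qquad\Longrightarrow\qquad
p\bigl(Y \mid \operatorname{do}(X = x)\bigr) \;=\; p(Y \mid X = x)
```

    The resulting interventional distribution no longer depends on how X is generated, so it is invariant across environments that differ only in p(X | E), which is the kind of stability the surgery estimator targets.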

    Efficient Edge Rewiring Strategies for Enhancement in Network Capacity

    The structure of a network has a great impact on its traffic dynamics. Most real-world networks are heterogeneous and exhibit a scale-free structure. In a scale-free network, a new node prefers to connect to hub nodes, and the network capacity is curtailed by the small-degree nodes. Therefore, we propose rewiring a fraction of the links in the network to improve its transport efficiency. In this paper, we discuss some efficient link rewiring strategies and perform simulations on scale-free networks, confirming the effectiveness of these strategies. The rewiring strategies reduce the centrality of the nodes with the highest betweenness centrality. After the link rewiring process, the degree distribution of the network remains the same. This work will be beneficial for the enhancement of network performance.
    Comment: 14 pages
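    Rewiring that leaves the degree distribution unchanged is typically implemented as a double-edge swap: replace edges (a,b) and (c,d) with (a,d) and (c,b), so every node keeps its degree. The sketch below picks the pair to swap at random, whereas the paper targets links through high-betweenness nodes; the toy graph is illustrative:

```python
import random

def degree_preserving_swap(edges, rng):
    """One double-edge swap: (a,b),(c,d) -> (a,d),(c,b). Every node
    keeps its degree, preserving the degree distribution -- the
    invariant the rewiring strategies maintain. (Edge selection here
    is random, not betweenness-targeted as in the paper.)"""
    edge_set = set(edges)
    for _ in range(100):  # retry until a valid swap is found
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        if (a, d) in edge_set or (d, a) in edge_set:
            continue  # swap would create a parallel edge
        if (c, b) in edge_set or (b, c) in edge_set:
            continue
        edges.remove((a, b)); edges.remove((c, d))
        edges.extend([(a, d), (c, b)])
        return edges
    return edges  # no valid swap found; leave the graph unchanged

def degrees(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]
before = degrees(edges)
swapped = degree_preserving_swap(list(edges), random.Random(0))
after = degrees(swapped)
print(before == after)  # degree sequence preserved
```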

    Tutorial: Safe and Reliable Machine Learning

    This document serves as a brief overview of the "Safe and Reliable Machine Learning" tutorial given at the 2019 ACM Conference on Fairness, Accountability, and Transparency (FAT* 2019). The talk slides can be found here: https://bit.ly/2Gfsukp, while a video of the talk is available here: https://youtu.be/FGLOCkC4KmE, and a complete list of references for the tutorial here: https://bit.ly/2GdLPme.
    Comment: Overview of the "Safe and Reliable Machine Learning" tutorial given at the 2019 ACM Conference on Fairness, Accountability, and Transparency (FAT* 2019)

    Deformable Distributed Multiple Detector Fusion for Multi-Person Tracking

    This paper addresses fully automated multi-person tracking in complex environments with challenging occlusions and extensive pose variations. Our solution combines multiple detectors for a set of different regions of interest (e.g., full body and head) for multi-person tracking. The use of multiple detectors leads to fewer missed detections, as it exploits the complementary strengths of the individual detectors. While the number of false positives may increase with the larger number of bounding boxes produced by multiple detectors, we propose to group the detection outputs by bounding-box location and depth information. For robustness to significant pose variations, deformable spatial relationships between detectors are learned in our multi-person tracking system. On RGBD data from a live Intensive Care Unit (ICU), we show that the proposed method significantly improves multi-person tracking performance over state-of-the-art methods.
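    Grouping detections from multiple detectors by location and depth can be sketched as a greedy merge: boxes whose centers and depths are close are assigned to one person hypothesis. The thresholds and toy detections below are illustrative assumptions, a simplified stand-in for the paper's fusion:

```python
def group_detections(dets, pos_tol=30.0, depth_tol=0.5):
    """Greedily group (cx, cy, depth) detections from multiple
    detectors: nearby boxes at similar depth are merged into one
    person hypothesis (thresholds are illustrative, not learned)."""
    groups = []
    for (cx, cy, depth) in dets:
        for g in groups:
            gx, gy, gd = g[0]  # compare against the group's first box
            if (abs(cx - gx) < pos_tol and abs(cy - gy) < pos_tol
                    and abs(depth - gd) < depth_tol):
                g.append((cx, cy, depth))
                break
        else:
            groups.append([(cx, cy, depth)])  # start a new hypothesis
    return groups

# A full-body and a head detector fire on the same person, while a
# second person stands elsewhere at a different depth:
dets = [(100, 200, 2.0), (105, 195, 2.1), (400, 210, 3.5)]
groups = group_detections(dets)
print(len(groups))  # 2 person hypotheses
```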