
    Network Coding Based Reliable Multi-path Routing in Wireless Sensor Network

    This thesis proposes an easily realizable composite network topology model based on network coding. The model combines disjoint multi-path routing (DMR) and braided multi-path routing (BMR) in wireless sensor networks, and network coding is employed on the multi-path routes. By combining node coding with multi-path routing, NC-RMR (Network Coding based Reliable disjoint and braided Multipath Routing) strengthens network reliability while reducing the number of required sub-paths and helping to balance the network load. Finally, simulations verify its theoretical correctness and performance advantages. DOI: http://dx.doi.org/10.11591/telkomnika.v11i12.371
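    As a concrete illustration of how coding over multiple sub-paths can reduce the number of paths needed for reliable delivery, below is a minimal sketch using XOR (GF(2)) network coding over three sub-paths; the coding scheme, packet format, and path count are illustrative assumptions, not NC-RMR's actual design.

```python
# Minimal sketch of XOR-based network coding over three sub-paths.
# Illustrative only; NC-RMR's actual coding scheme may differ.

def xor_encode(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length packets with bitwise XOR (coding over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

def recover(received: dict) -> tuple:
    """Recover the original packets (a, b) from any two of {a, b, a^b}."""
    if "a" in received and "b" in received:
        return received["a"], received["b"]
    if "a" in received:  # b was lost: b = a XOR (a^b)
        return received["a"], xor_encode(received["a"], received["a^b"])
    return xor_encode(received["b"], received["a^b"]), received["b"]

a, b = b"\x01\x02", b"\x03\x04"
# The source sends a, b, and a^b over three sub-paths; any one may fail.
arrived = {"a": a, "a^b": xor_encode(a, b)}  # the path carrying b failed
assert recover(arrived) == (a, b)
```

    With coding, any two of the three sub-paths suffice; without it, a lost packet could only be tolerated by duplicating it on extra sub-paths.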

    NCStorage: A Prototype of Network Coding-based Distributed Storage System

    Recent studies have shown that network coding can improve the performance of distributed storage systems. However, most of these studies are theoretical and focus mainly on bandwidth efficiency. This paper aims to provide a practical system, so we implement NCStorage, a network-coding-based distributed storage system. NCStorage runs network coding on the Internet, so users from all over the world can access it. Unlike traditional techniques such as erasure coding and fountain coding, NCStorage requires a re-encoding operation at the storage servers. We observe that, thanks to this re-encoding, the bandwidth required to repair a failed storage server is reduced, the computation overhead is balanced across servers, and security is enhanced. Both the encoding at the clients and the re-encoding at the storage servers are based on a deterministic algorithm. Finally, we deploy 8 storage servers in different locations to evaluate NCStorage, and the experimental results validate the analysis. DOI: http://dx.doi.org/10.11591/telkomnika.v11i12.3709
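    To make the repair-bandwidth benefit concrete, here is a minimal sketch in which a surviving server re-encodes the coded blocks it stores into a single block before sending it to the repair process, instead of shipping the blocks raw. The field (GF(257)), coefficients, and block layout are illustrative assumptions, not NCStorage's deterministic scheme.

```python
# Minimal sketch of re-encoding at a storage server during repair.
# Arithmetic over the prime field GF(257) for simplicity; NCStorage's
# deterministic coding algorithm and field choice may differ.
P = 257

def combine(blocks, coeffs):
    """Linearly combine equal-length blocks: sum_i coeffs[i] * blocks[i] mod P."""
    return [sum(c * blk[j] for c, blk in zip(coeffs, blocks)) % P
            for j in range(len(blocks[0]))]

# This server stores two coded blocks. For repair it ships ONE re-encoded
# block instead of both raw blocks, halving its repair traffic in this toy
# setting; codes without server-side re-encoding would ship the raw blocks.
stored = [[3, 1, 4], [1, 5, 9]]
coeffs = [2, 7]                   # supplied by the repair protocol
print(combine(stored, coeffs))    # -> [13, 37, 71]: one block, not two
```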

    DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models

    Aiming to understand the underlying explainable factors behind observations and to model the conditional generation process on those factors, we connect disentangled representation learning to Diffusion Probabilistic Models (DPMs) to take advantage of the remarkable modeling ability of DPMs. We propose a new task, disentanglement of DPMs: given a pre-trained DPM, without any annotations of the factors, automatically discover the inherent factors behind the observations and disentangle the gradient fields of the DPM into sub-gradient fields, each conditioned on the representation of one discovered factor. With a disentangled DPM, those inherent factors can be automatically discovered, explicitly represented, and clearly injected into the diffusion process via the sub-gradient fields. To tackle this task, we devise an unsupervised approach named DisDiff, achieving disentangled representation learning within the framework of DPMs. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of DisDiff. Comment: Accepted by NeurIPS 202
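    The decomposition at the heart of this task can be sketched as follows: the total gradient (score) field is the pre-trained DPM's score plus a sum of sub-gradient fields, each conditioned on one discovered factor's representation. All module names and shapes below are illustrative assumptions, not DisDiff's architecture.

```python
# Sketch: score(x_t) = pretrained_score(x_t) + sum_k subfield_k(x_t, z_k),
# where z_k is the learned representation of the k-th discovered factor.
import torch
import torch.nn as nn

class FactorizedScore(nn.Module):
    def __init__(self, n_factors: int, x_dim: int, z_dim: int):
        super().__init__()
        self.n_factors, self.z_dim = n_factors, z_dim
        self.encoder = nn.Linear(x_dim, n_factors * z_dim)  # discovers factor reps
        self.subfields = nn.ModuleList(                     # one sub-field per factor
            [nn.Linear(x_dim + z_dim, x_dim) for _ in range(n_factors)])

    def forward(self, x_t, base_score):
        z = self.encoder(x_t).view(-1, self.n_factors, self.z_dim)
        sub = sum(f(torch.cat([x_t, z[:, k]], dim=-1))
                  for k, f in enumerate(self.subfields))
        return base_score + sub  # pre-trained score + factor-conditioned fields
```

    Because each sub-field depends on only one factor's representation, a single factor can in principle be injected or swapped during sampling without touching the others.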

    Breaking through the learning plateaus of in-context learning in Transformer

    In-context learning, i.e., learning from context examples, is an impressive ability of Transformers. Training Transformers to acquire this in-context learning skill is computationally intensive because of learning plateaus: periods within the training process during which the model's in-context learning capability improves little or not at all. To study the mechanism behind these plateaus, we conceptually separate out a component of the model's internal representation that is affected exclusively by the model's weights. We call this the "weights component", and the remainder the "context component". By conducting meticulous and controlled experiments on synthetic tasks, we find that the persistence of learning plateaus correlates with compromised functionality of the weights component. Recognizing impaired performance of the weights component as a fundamental behavior that drives learning plateaus, we develop three strategies to expedite the learning of Transformers. The effectiveness of these strategies is further confirmed on natural language processing tasks. In conclusion, our research demonstrates the feasibility of cultivating a powerful in-context learning ability within AI systems in an eco-friendly manner.
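    One hypothetical way to probe such a decomposition is to hold the query fixed, shuffle the context examples repeatedly, and attribute the context-invariant part of the representation to the weights. The `model(context, query)` interface below is an assumption for illustration, not the authors' measurement procedure.

```python
# Illustrative diagnostic: split a query's hidden representation into a
# context-invariant part ("weights component") and the residual that varies
# with the in-context examples ("context component").
import torch

def split_components(model, query, contexts, n_shuffles: int = 8):
    reps = []
    for _ in range(n_shuffles):
        order = torch.randperm(len(contexts)).tolist()
        shuffled = [contexts[i] for i in order]
        reps.append(model(shuffled, query))   # hidden rep of the query token
    reps = torch.stack(reps)
    weights_component = reps.mean(dim=0)          # stable across context shuffles
    context_component = reps - weights_component  # context-dependent residual
    return weights_component, context_component
```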

    Vector-based Representation is the Key: A Study on Disentanglement and Compositional Generalization

    Recognizing the elementary concepts underlying observations (disentanglement) and generating novel combinations of these concepts (compositional generalization) are fundamental abilities that let humans learn knowledge rapidly and generalize to new tasks, and deep learning models struggle with both. Towards human-like intelligence, various works on disentangled representation learning have been proposed, and recently some studies on compositional generalization have been presented. However, few works study the relationship between disentanglement and compositional generalization, and the observed results are inconsistent. In this paper, we study several typical disentangled representation learning methods in terms of both disentanglement and compositional generalization abilities, and we provide an important insight: vector-based representation (using a vector instead of a scalar to represent a concept) is the key to enabling both good disentanglement and strong compositional generalization. This insight also resonates with the neuroscience finding that the brain encodes information in neuron population activity rather than in individual neurons. Motivated by this observation, we further propose a method to reform the scalar-based disentanglement methods (β-TCVAE and FactorVAE) into vector-based ones to increase both capabilities. We investigate the impact of the dimensionality of the vector-based representation and one important question: whether better disentanglement indicates higher compositional generalization. In summary, our study demonstrates that it is possible to achieve both good concept recognition and novel concept composition, contributing an important step towards human-like intelligence. Comment: Preprint
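    The contrast between scalar-based and vector-based representations can be made concrete with the two latent layouts below; the dimensions and encoders are illustrative assumptions, not the reformed β-TCVAE/FactorVAE architectures from the paper.

```python
# Scalar-based vs. vector-based latent layouts for n_concepts concepts.
import torch
import torch.nn as nn

n_concepts, concept_dim, x_dim = 6, 8, 128

# Scalar-based: one latent SCALAR per concept -> z of shape (B, n_concepts).
scalar_encoder = nn.Linear(x_dim, n_concepts)

# Vector-based: one latent VECTOR per concept -> z of shape
# (B, n_concepts, concept_dim); disentanglement regularizers then act on
# whole per-concept blocks rather than single dimensions.
vector_encoder = nn.Linear(x_dim, n_concepts * concept_dim)

x = torch.randn(4, x_dim)
z_scalar = scalar_encoder(x)                                    # (4, 6)
z_vector = vector_encoder(x).view(-1, n_concepts, concept_dim)  # (4, 6, 8)
```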