    Direct User Calls from the Kernel: Design and Implementation

    Traditional, general-purpose operating systems strictly separate user processes from the kernel: processes can communicate with the kernel only through system calls. Because system calls enforce this security boundary, they inevitably incur performance overhead. Direct User Callback from the Kernel, or DUCK, is a framework that improves the performance of network-centric applications by executing part of the application code directly inside the kernel. Because the code runs in kernel mode and can access kernel memory directly, DUCK eliminates two important sources of system call overhead, namely mode switches and data copying. One issue with DUCK is how to design an application programming interface (API) that is general, efficient, and easy to use. In this thesis, we present the design of the DUCK API, which includes functions for both direct user code execution and zero-copy buffer management. We have implemented DUCK prototypes on the Solaris/SPARC platform. An efficient way to implement direct user code invocation is through memory sharing between the kernel and user processes; however, because Solaris/SPARC separates the user and kernel address spaces, achieving such sharing is difficult. In the thesis, we study the SPARC architecture and the Solaris virtual memory subsystem, and discuss three potential approaches to supporting the memory sharing DUCK requires. Micro-benchmark experiments demonstrate that our DUCK prototype implementation improves the peak throughput of a simple UDP forwarder by 28% to 44%.
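
    To make the benchmark workload concrete, below is a minimal user-space UDP forwarder in Python (an illustrative sketch, not the thesis's benchmark code; the port and destination address are invented). Each recvfrom/sendto is a system call that costs two mode switches plus a copy across the kernel/user boundary, which is precisely the overhead DUCK removes by running the forwarding logic in kernel mode over shared, zero-copy buffers.

        import socket

        # Hypothetical values for illustration only.
        LISTEN_PORT = 9000
        DEST = ("127.0.0.1", 9001)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", LISTEN_PORT))

        while True:
            data, addr = sock.recvfrom(2048)  # syscall: mode switch + copy into user space
            sock.sendto(data, DEST)           # syscall: mode switch + copy back to the kernel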

    A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing

    The partially observable generalized linear model (POGLM) is a powerful tool for understanding neural connectivity under the assumption that hidden neurons exist. With spike trains recorded only from visible neurons, existing works learn the POGLM with variational inference (VI), which exposes two main difficulties of this latent-variable model: (1) the sampled Poisson hidden spike counts are discrete, which prevents the use of the pathwise gradient estimator in VI; and (2) the existing design of the variational model is neither expressive nor time-efficient, which further limits performance. For (1), we propose a new differentiable POGLM, which enables the pathwise gradient estimator, a better choice than the score-function gradient estimator used in existing works. For (2), we propose a forward-backward message-passing sampling scheme for the variational model. Comprehensive experiments show that our differentiable POGLM with forward-backward message passing performs better on one synthetic and two real-world datasets. Furthermore, our new method yields more interpretable parameters, underscoring its significance in neuroscience.
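
    To see why issue (1) arises, the PyTorch sketch below (illustrative, not the paper's code) contrasts the two estimators. Discrete Poisson samples admit only the score-function (REINFORCE) estimator; a continuous latent, standing in here for the paper's differentiable relaxation of spike counts, supports the lower-variance pathwise (reparameterization) estimator.

        import torch
        from torch.distributions import Normal, Poisson

        # Poisson draws are discrete: .sample() exists but .rsample() does not,
        # so no gradient flows back into `rate`.
        rate = torch.tensor(3.0, requires_grad=True)
        z = Poisson(rate).sample()                    # detached sample

        # Score-function estimator used by prior work (high variance):
        f = (z - 2.0) ** 2                            # an arbitrary objective
        surrogate = f.detach() * Poisson(rate).log_prob(z)
        surrogate.backward()                          # estimates E[f * grad log p]

        # Pathwise estimator, available once the latent is continuous
        # (a Normal stands in for the paper's relaxation):
        mu = torch.tensor(3.0, requires_grad=True)
        z_relaxed = Normal(mu, 1.0).rsample()         # z = mu + eps, eps ~ N(0, 1)
        loss = (z_relaxed - 2.0) ** 2
        loss.backward()                               # gradient flows through z itself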

    Markovian Gaussian Process: A Universal State-Space Representation for Stationary Temporal Gaussian Process

    Gaussian Processes (GPs) and Linear Dynamical Systems (LDSs) are essential tools for modeling time series and dynamical systems. GPs can capture complex, nonlinear dynamics but are computationally demanding, while LDSs offer efficient computation but lack the expressive power of GPs. To combine their benefits, we introduce a universal method that allows an LDS to mirror a stationary temporal GP. This state-space representation, known as the Markovian Gaussian Process (Markovian GP), leverages the flexibility of kernel functions while maintaining efficient linear-time computation. Unlike existing GP-LDS conversion methods, which require separability for most multi-output kernels, our approach works universally for single- and multi-output stationary temporal kernels. We evaluate our method by computing covariances, performing regression tasks, and applying it to a neuroscience application, demonstrating that it provides an accurate state-space representation for stationary temporal GPs.
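
    The classic single-output instance of such a GP-LDS correspondence is the Matern-3/2 kernel, whose exact two-dimensional state-space form is standard in the literature. The sketch below (illustrative of the general idea, not the paper's universal construction) recovers the kernel values directly from the LDS matrices.

        import numpy as np
        from scipy.linalg import expm

        # Matern-3/2 kernel: k(tau) = s2 * (1 + lam*|tau|) * exp(-lam*|tau|),
        # with lam = sqrt(3) / lengthscale.
        s2, lengthscale = 1.0, 0.5
        lam = np.sqrt(3.0) / lengthscale

        F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])  # SDE drift matrix
        Pinf = np.diag([s2, lam**2 * s2])                  # stationary state covariance
        H = np.array([[1.0, 0.0]])                         # observe the first state

        def kernel_from_lds(tau):
            """Covariance recovered from the LDS: H @ expm(F*|tau|) @ Pinf @ H.T."""
            return (H @ expm(F * abs(tau)) @ Pinf @ H.T).item()

        def matern32(tau):
            return s2 * (1.0 + lam * abs(tau)) * np.exp(-lam * abs(tau))

        for tau in (0.0, 0.3, 1.0):
            print(tau, kernel_from_lds(tau), matern32(tau))  # the two columns agree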

    Unveiling Decentralization: A Comprehensive Review of Technologies, Comparison, Challenges in Bitcoin, Ethereum, and Solana Blockchain

    Bitcoin stands as a groundbreaking development in decentralized exchange, enabling transactions without the need for intermediaries. By leveraging cryptographic proof mechanisms, Bitcoin eliminates reliance on third-party financial institutions. Ethereum, the second-largest cryptocurrency by market capitalization, builds on Bitcoin's groundwork by introducing smart contracts and decentralized applications, striving to surpass the limitations of Bitcoin's scripting language by achieving full Turing-completeness for executing intricate computational tasks. Solana introduces a novel architecture for high-performance blockchains, employing cryptographic timestamps to validate decentralized transactions and significantly boost block-creation throughput. Through a comprehensive examination of these blockchain technologies, their distinctions, and the associated challenges, this paper aims to offer valuable insights and comparative analysis for both researchers and practitioners.
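
    Solana's timestamping mechanism, Proof of History, is at heart a sequential SHA-256 hash chain that anyone can replay to verify ordering. The sketch below is a heavily simplified illustration (the function, its parameters, and the toy event are ours, not Solana's implementation): because each hash depends on the previous one, an event mixed into the chain acquires a verifiable position in the sequence.

        import hashlib

        def poh_chain(seed: bytes, events: dict, ticks: int):
            """Sequential hash chain with events mixed in at given tick indices."""
            h = hashlib.sha256(seed).digest()
            record = []
            for i in range(ticks):
                if i in events:
                    h = hashlib.sha256(h + events[i]).digest()  # event pinned at tick i
                    record.append((i, events[i], h.hex()))
                else:
                    h = hashlib.sha256(h).digest()              # an empty tick
            return h, record

        final, record = poh_chain(b"genesis", {3: b"tx: alice->bob 5"}, ticks=10)
        # A verifier re-runs the chain; the event cannot be moved without
        # changing every subsequent hash.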

    Multi-Region Markovian Gaussian Process: An Efficient Method to Discover Directional Communications Across Multiple Brain Regions

    Studying the complex interactions between different brain regions is crucial in neuroscience. Various statistical methods have been used to explore latent communication across multiple brain regions; two main categories are the Gaussian Process (GP) and the Linear Dynamical System (LDS), each with unique strengths. The GP-based approach effectively discovers latent variables with frequency bands and communication directions, whereas the LDS-based approach is computationally efficient but lacks expressive power in its latent representation. In this study, we merge the two methodologies by creating an LDS that mirrors a multi-output GP, termed the Multi-Region Markovian Gaussian Process (MRM-GP). Our work establishes a connection between an LDS and a multi-output GP that explicitly models frequencies and phase delays within the latent space of neural recordings. Consequently, the model achieves a linear inference cost over time points and provides an interpretable low-dimensional representation, revealing communication directions across brain regions and separating oscillatory communications into different frequency bands.
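
    How an LDS can encode a frequency and a phase delay is easy to see in a toy example (illustrative only; the frequency, readout angles, and noise scale below are invented, and MRM-GP's actual construction mirrors a multi-output GP). A damped two-dimensional rotation oscillates at a fixed frequency, and reading the latent state out at two different angles makes one "region" lag the other by the angle difference.

        import numpy as np

        rng = np.random.default_rng(0)
        w, rho, dt, T = 2 * np.pi * 1.5, 0.98, 0.01, 2000  # 1.5 Hz oscillation
        R = np.array([[np.cos(w * dt), -np.sin(w * dt)],
                      [np.sin(w * dt),  np.cos(w * dt)]])
        A = rho * R                                        # damped rotation dynamics

        phase = np.pi / 4                                  # readout angle difference
        Ha = np.array([np.cos(phase), np.sin(phase)])      # readout for region A
        Hb = np.array([1.0, 0.0])                          # readout for region B

        x = np.array([1.0, 0.0])
        ya, yb = [], []
        for _ in range(T):
            x = A @ x + 0.05 * rng.standard_normal(2)
            ya.append(Ha @ x)
            yb.append(Hb @ x)
        # Cross-correlating ya and yb shows region A lagging region B by
        # `phase` radians at the oscillation frequency, i.e. a directed,
        # phase-delayed communication between the two readouts.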

    ViLTA: Enhancing Vision-Language Pre-training through Textual Augmentation

    Vision-language pre-training (VLP) methods have blossomed recently; their crucial goal is to jointly learn visual and textual features via a transformer-based architecture, demonstrating promising improvements on a variety of vision-language tasks. Prior work usually focuses on how to align visual and textual features, but strategies for improving the robustness of the model and speeding up its convergence remain insufficiently explored. In this paper, we propose a novel method, ViLTA, comprising two components that further help the model learn fine-grained representations from image-text pairs. For Masked Language Modeling (MLM), we propose a cross-distillation method that generates soft labels to enhance the robustness of the model, alleviating the problem of treating synonyms of masked words as negative samples under one-hot labels. For Image-Text Matching (ITM), we leverage the current language encoder to synthesize hard negatives based on the context of the language input, encouraging the model to learn high-quality representations by increasing the difficulty of the ITM task. With these techniques, ViLTA achieves better performance on various vision-language tasks, and extensive experiments on benchmark datasets demonstrate its effectiveness and its promising potential for vision-language pre-training.
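
    The soft-label idea for MLM can be sketched in a few lines of PyTorch (a minimal sketch; the function name, temperature, and tensor shapes are our illustrative choices, not ViLTA's implementation). Replacing the one-hot target with a teacher's softened distribution lets synonyms of the masked word share probability mass instead of being treated as pure negatives.

        import torch
        import torch.nn.functional as F

        def soft_label_mlm_loss(student_logits, teacher_logits, temperature=2.0):
            """KL divergence between student predictions and distilled soft targets.

            student_logits, teacher_logits: (num_masked_tokens, vocab_size)
            """
            soft_targets = F.softmax(teacher_logits / temperature, dim=-1).detach()
            log_probs = F.log_softmax(student_logits / temperature, dim=-1)
            return F.kl_div(log_probs, soft_targets, reduction="batchmean")

        # Toy usage with random logits standing in for real model outputs:
        vocab_size, num_masked = 30522, 8
        student = torch.randn(num_masked, vocab_size, requires_grad=True)
        teacher = torch.randn(num_masked, vocab_size)
        loss = soft_label_mlm_loss(student, teacher)
        loss.backward()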

    Global expression profiling reveals regulation of CTGF/CCN2 during lactogenic differentiation

    Mammary epithelial cells go through a series of developmental changes during pregnancy and lactation, including proliferation, differentiation, secretion, and apoptosis. HC11 mouse mammary epithelial cells, which undergo lactogen-induced differentiation in cell culture, were used to follow the changes in gene expression during this process. The expression profiles of over 20,000 genes in HC11 cells undergoing lactogenic differentiation were compared to those of non-differentiated cells using DNA microarray analysis. Greater than two-fold changes were detected in 998 genes in the differentiated cells versus growth controls; several genes, including CTGF/CCN2, exhibited a greater than five-fold increase. The gene expression pattern of more than twenty genes was validated. The results indicate the involvement of numerous genes and pathways in the differentiation of mouse mammary epithelial cells in culture, and they identify genetic pathways associated with specific transcriptional regulation. In addition, the expression of a subset of genes regulated by lactogenic differentiation in HC11 cells, including CTGF/CCN2 and osteopontin, was examined in mouse mammary glands, revealing expression during pregnancy and lactation that declined during involution of the glands. To probe the mechanism by which epidermal growth factor (EGF), a known inhibitor of lactogenic differentiation in HC11 cells, blocks lactogenesis, HC11 cells stimulated with lactogenic hormone in the presence of EGF were also profiled. These data revealed EGF regulation of a specific subset of genes, including important cell-cycle regulators. The studies confirm the value of expression profiling in defining the gene transcription associated with differentiation of mammary epithelial cells.
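
    The two-fold screen reduces to a simple log-ratio threshold; the sketch below uses synthetic numbers (not the study's microarray data) purely to show the arithmetic behind flagging genes with greater than two-fold changes.

        import numpy as np

        rng = np.random.default_rng(1)
        n_genes = 20000
        control = rng.lognormal(mean=5.0, sigma=1.0, size=n_genes)
        differentiated = control * rng.lognormal(mean=0.0, sigma=0.5, size=n_genes)

        log2_fc = np.log2(differentiated / control)
        regulated = np.abs(log2_fc) > 1.0   # |log2 FC| > 1 means a >2-fold change
        print(f"{regulated.sum()} genes changed more than two-fold")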