
    A roadmap toward the synthesis of life

    The synthesis of life from non-living matter has captivated and divided scientists for centuries. This bold goal aims at unraveling the fundamental principles of life and leveraging its unique features, such as its resilience, sustainability, and ability to evolve. Synthetic life represents more than an academic milestone: it has the potential to revolutionize biotechnology, medicine, and materials science. Although the fields of synthetic biology, systems chemistry, and biophysics have made great strides toward synthetic life, progress has been hindered by social, philosophical, and technical challenges, such as vague goals, misaligned interdisciplinary efforts, and incomplete engagement with public and ethical concerns. Our perspective offers a roadmap toward the synthesis of life based on discussions during a 2-week workshop with scientists from around the globe.

    Mis-classified Vector Guided Softmax Loss for Face Recognition

    Face recognition has witnessed significant progress due to the advances of deep convolutional neural networks (CNNs), the central task of which is how to improve feature discrimination. To this end, several margin-based (e.g., angular, additive, and additive angular margin) softmax loss functions have been proposed to increase the feature margin between different classes. However, despite the great achievements that have been made, these losses mainly suffer from three issues: 1) they ignore the importance of informative feature mining for discriminative learning; 2) they encourage the feature margin only from the ground-truth class, without exploiting the discriminability of other non-ground-truth classes; 3) the feature margin between different classes is set to be the same and fixed, which may not adapt well to different situations. To cope with these issues, this paper develops a novel loss function that adaptively emphasizes the mis-classified feature vectors to guide discriminative feature learning. Thus we can address all of the above issues and achieve more discriminative face features. To the best of our knowledge, this is the first attempt to combine the advantages of feature margin and feature mining in a unified loss function. Experimental results on several benchmarks demonstrate the effectiveness of our method over state-of-the-art alternatives. Comment: Accepted by AAAI 2020 as an oral presentation. arXiv admin note: substantial text overlap with arXiv:1812.1131
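
    To make the idea concrete, below is a minimal NumPy sketch of the general mechanism the abstract describes: non-ground-truth classes whose cosine similarity still beats the margin-adjusted target logit are treated as mis-classified directions and re-weighted upward in the softmax normalizer. The function name mv_softmax_loss, the additive-margin choice, and the re-weighting factor t are illustrative assumptions, not the paper's exact formulation or hyperparameters.

import numpy as np

def mv_softmax_loss(cosines, labels, s=32.0, m=0.35, t=0.2):
    """Illustrative mis-classified-vector-guided softmax loss (NumPy sketch).

    cosines: (N, C) cosine similarities between normalized features and class weights.
    labels:  (N,) integer ground-truth class indices.
    s, m, t: scale, additive margin, and extra emphasis on hard non-target classes.
    """
    n = cosines.shape[0]
    rows = np.arange(n)
    target_cos = cosines[rows, labels]            # (N,)
    target_logit = target_cos - m                 # additive margin on the target class

    # Non-target classes whose cosine still beats the margined target are
    # treated as hard ("mis-classified") directions.
    hard = cosines > target_logit[:, None]        # (N, C) boolean
    hard[rows, labels] = False

    # Re-weight hard non-target logits upward so they contribute more to the
    # normalizer; easy negatives are left untouched (one simple choice of re-weighting).
    logits = np.where(hard, cosines * (1.0 + t) + t, cosines)
    logits[rows, labels] = target_logit
    logits *= s

    # Numerically stable softmax cross-entropy on the modified logits.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()

# Toy usage: 4 samples, 10 classes, cosines in (-1, 1).
rng = np.random.default_rng(0)
cosines = np.tanh(rng.normal(size=(4, 10)))
labels = np.array([1, 3, 5, 7])
print(mv_softmax_loss(cosines, labels))

    Note that the extra weight only applies to hard negatives, so easy samples reduce to an ordinary margin-based softmax in this sketch.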

    Regulation of CCL5 Expression in Smooth Muscle Cells Following Arterial Injury

    Chemokines play a crucial role in inflammation and in the pathophysiology of atherosclerosis by recruiting inflammatory immune cells to the endothelium. The chemokine CCL5 has been shown to be involved in atherosclerosis progression. However, little is known about how CCL5 is regulated in vascular smooth muscle cells. In this study we report that, following balloon artery injury, CCL5 mRNA expression in the aorta was induced, peaked at day 7, and then declined, whereas IP-10 and MCP-1 mRNA expression were induced, peaked at day 3, and then rapidly declined.

    PQCache: Product Quantization-based KVCache for Long Context LLM Inference

    As the field of Large Language Models (LLMs) continues to evolve, the context length in inference is steadily growing. The Key-Value Cache (KVCache), a crucial component in LLM inference, has now become the primary memory bottleneck due to limited GPU memory. Current methods selectively determine suitable keys and values for self-attention computation in LLMs to address this issue, but they either fall short in maintaining model quality or result in high serving latency. Drawing inspiration from advanced embedding retrieval techniques used in the database community, we consider the storage and searching of the KVCache as a typical embedding retrieval problem. We propose PQCache, which employs Product Quantization (PQ) to manage the KVCache, maintaining model quality while ensuring low serving latency. During the prefilling phase, we apply PQ to the tokens' keys for each LLM layer and head. During the autoregressive decoding phase, for each newly generated token, we first identify important tokens through Maximum Inner-Product Search (MIPS) using the PQ codes and centroids, and then fetch the corresponding key-value pairs for self-attention computation. Through meticulous design of overlapping and caching, we minimize any additional computation and communication overhead during both phases. Extensive experiments show that PQCache achieves both effectiveness and efficiency: it maintains model quality even when only 1/5 of the tokens are involved in attention, while attaining acceptable system latency.
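
    As a rough illustration of the retrieval view described above, the sketch below compresses cached key vectors with product quantization and then uses per-sub-space inner-product lookup tables to shortlist the top-scoring tokens for a query, which is the MIPS step the abstract refers to. The helper names (kmeans, pq_encode, mips_topk), the sub-space count, and the codebook size are assumptions for illustration, not PQCache's actual implementation, which also handles per-layer/per-head structure, overlapping, and caching.

import numpy as np

def kmeans(x, k, iters=10, seed=0):
    """Tiny Lloyd's k-means (illustrative; real systems use optimized libraries)."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid (L2), then recompute means.
        d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for j in range(k):
            pts = x[assign == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    return centroids

def pq_encode(keys, n_sub=4, k=16):
    """Split keys into n_sub sub-vectors; quantize each sub-space with k centroids.

    Returns centroids of shape (n_sub, k, d_sub) and uint8 codes of shape (n_tokens, n_sub).
    """
    n, d = keys.shape
    d_sub = d // n_sub
    centroids = np.empty((n_sub, k, d_sub))
    codes = np.empty((n, n_sub), dtype=np.uint8)
    for s in range(n_sub):
        sub = keys[:, s * d_sub:(s + 1) * d_sub]
        centroids[s] = kmeans(sub, k, seed=s)
        d2 = ((sub[:, None, :] - centroids[s][None, :, :]) ** 2).sum(-1)
        codes[:, s] = d2.argmin(axis=1)
    return centroids, codes

def mips_topk(query, centroids, codes, topk):
    """Approximate inner products via per-sub-space lookup tables, then take top-k tokens."""
    n_sub, k, d_sub = centroids.shape
    # table[s, c] = <query sub-vector s, centroid c of sub-space s>
    table = np.einsum('sd,skd->sk', query.reshape(n_sub, d_sub), centroids)
    scores = table[np.arange(n_sub)[None, :], codes].sum(axis=1)  # (n_tokens,)
    return np.argsort(-scores)[:topk]

# Toy usage: 256 cached key vectors of dim 64; keep 1/5 of the tokens.
rng = np.random.default_rng(1)
keys = rng.normal(size=(256, 64))
centroids, codes = pq_encode(keys)
query = rng.normal(size=64)
important = mips_topk(query, centroids, codes, topk=len(keys) // 5)
# `important` indexes the tokens whose exact key-value pairs would be fetched
# for the subsequent self-attention computation, as described in the abstract.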