Mutagenesis analysis of the zinc-finger antiviral protein
BACKGROUND: The zinc-finger antiviral protein (ZAP) specifically inhibits the replication of certain viruses, including murine leukemia virus (MLV), by preventing the accumulation of viral mRNA in the cytoplasm. ZAP binds directly to the viral mRNA through its zinc-finger motifs and recruits the RNA exosome to degrade the target RNA. The RNA helicase p72 is required for the optimal function of ZAP. To understand the structure-function relationship of ZAP, we performed an alanine scanning analysis. RESULTS: A series of ZAP mutants was generated in which three consecutive amino acids were replaced with three alanines. The mutants were analyzed for their antiviral activities against a pseudotyped MLV vector. Of the nineteen mutants analyzed, seven displayed significantly lower antiviral activities. Two of these mutations were in the very N-terminal domain, and five were within or around the first and second zinc-finger motifs. These mutants were further analyzed for their abilities to bind to the target RNA, the exosome, and the RNA helicase p72. Mutants Nm3 and Nm63 lost the ability to bind to RNA. Mutants Nm63 and Nm93 displayed compromised interaction with p72, while the binding of Nm133 to p72 was very modest. The interactions of all the mutants with the exosome were comparable to that of wild-type ZAP. CONCLUSIONS: The integrity of the very N-terminal domain and of the first and second zinc-finger motifs appears to be required for ZAP's antiviral activity. Analyses of the mutants' abilities to interact with the target RNA and the RNA helicase p72 confirmed our previous results. The mutants that bind normally to the target RNA, the exosome, and the RNA helicase p72 may be useful tools for further dissecting the mechanism underlying ZAP's antiviral activity.
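As a purely illustrative aside, the triple-alanine scanning strategy described above can be sketched in a few lines of Python. The example sequence and the Nm-style naming below are placeholders, not the actual ZAP sequence or the authors' numbering scheme.

```python
# Hypothetical sketch of generating triple-alanine scanning mutants of a
# protein sequence. Sequence and mutant names are illustrative placeholders.
def alanine_scan(seq: str, window: int = 3):
    """Yield (name, mutant) pairs in which `window` consecutive residues
    are replaced by alanines, scanning along the sequence."""
    for start in range(0, len(seq) - window + 1, window):
        mutant = seq[:start] + "A" * window + seq[start + window:]
        yield f"Nm{start + 1}", mutant  # name by 1-based start position

# Example with a made-up 18-residue fragment:
for name, mutant in alanine_scan("MTDKVLRSQNPEGHWYCF"):
    print(name, mutant)
```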
Metal‐Organic Frameworks and their Applications in Hydrogen and Oxygen Evolution Reactions
The hydrogen evolution reaction (HER) and the oxygen evolution reaction (OER) play a vital role in many energy storage and conversion systems, including water splitting, rechargeable metal-air batteries, and unitized regenerative fuel cells. Noble-metal catalysts based on Pt, Ir, and Au are the best electrocatalysts for the HER/OER, but they suffer from high price and scarcity. There is therefore an urgent need to develop efficient, low-cost, and environmentally friendly non-noble-metal electrocatalysts. Metal-organic frameworks (MOFs) are crystalline materials with porous network structures. MOFs offer diverse compositions, large specific surface areas, and tunable pore structures, and they are easily functionalized. They have been widely studied and applied in many fields, such as gas adsorption/separation, drug delivery, catalysis, magnetism, and optoelectronics. Recently, MOF-based electrocatalysts for the HER/OER have developed rapidly, exhibiting excellent catalytic performance and demonstrating promising application prospects. In this chapter, the concept, structure, categories, and synthesis of MOFs are first introduced briefly. Then, recent applications of MOF-based catalysts for the HER/OER are discussed in detail, with particular emphasis on their synthesis, structure, and catalytic performance.
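For reference, the two half-reactions at issue are commonly written (here in acidic media) as:

```latex
% Standard water-splitting half-reactions in acidic media
\begin{align*}
  \text{HER:}\quad & 2\,\mathrm{H^{+}} + 2\,e^{-} \longrightarrow \mathrm{H_{2}}
      & E^{0} &= 0\ \mathrm{V\ vs.\ RHE} \\
  \text{OER:}\quad & 2\,\mathrm{H_{2}O} \longrightarrow \mathrm{O_{2}} + 4\,\mathrm{H^{+}} + 4\,e^{-}
      & E^{0} &= 1.23\ \mathrm{V\ vs.\ RHE}
\end{align*}
```

The 1.23 V thermodynamic gap between the two half-reactions is what makes efficient, earth-abundant electrocatalysts so sought after.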
Graph traverse reference network for sign language corpus retrieval in the wild
Sign languages are the primary languages of the deaf community, as well as of hearing individuals who are unable to speak; they use the visual-manual modality to convey meaning. In recent years, there has been explosive growth in the sign language videos available on video streaming and social media platforms. Given the size of these corpora, sign language users often face significant challenges in effectively finding the information they need. We therefore propose a novel deep learning architecture, the Graph Traverse Reference Network (GTRN), which allows visual signing queries to retrieve relevant sign language videos (documents) from a large corpus. GTRN introduces a traverse graph that provides coarse-to-fine reference information in a hierarchical manner, from frame-level to body-part-level observations. A reference-based attention mechanism computes the embedding of the visual input at each level, which allows the computation to be allocated across local devices and central servers. A contrastive learning strategy optimizes GTRN toward a joint latent space in which queries and documents are aligned by their meanings. Moreover, GTRN can leverage existing general-purpose visual foundation models by using their embeddings as its frame-level reference. To the best of our knowledge, this is one of the first studies to use visual signing queries to retrieve sign language videos in a real-world setting, and our comprehensive experiments demonstrate the effectiveness of the proposed method.
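A minimal sketch of the contrastive objective for aligning query and document embeddings in a joint latent space might look as follows; the symmetric InfoNCE form, shapes, and temperature are assumptions for illustration, not the authors' implementation.

```python
# Minimal InfoNCE-style contrastive loss for aligning signing-query and
# video-document embeddings in a shared latent space (illustrative only).
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb: torch.Tensor,
                     doc_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """query_emb, doc_emb: (batch, dim); row i of each is a matched pair."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = q @ d.t() / temperature          # pairwise cosine similarities
    targets = torch.arange(q.size(0), device=q.device)
    # Symmetric loss: match queries to documents and documents to queries.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```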
Efficient Prompt Tuning of Large Vision-Language Model for Fine-Grained Ship Classification
Fine-grained ship classification in remote sensing (RS-FGSC) poses a significant challenge due to the high similarity between classes and the limited availability of labeled data, which limits the effectiveness of traditional supervised classification methods. Recent advances in large pre-trained Vision-Language Models (VLMs) have demonstrated impressive few-shot and zero-shot learning capabilities, particularly in understanding image content. This study harnesses the potential of VLMs to improve classification accuracy for unseen ship categories, which is of considerable importance in scenarios where data are restricted by cost or privacy constraints. Directly fine-tuning VLMs for RS-FGSC often overfits the seen classes, resulting in suboptimal generalization to unseen classes and highlighting the difficulty of distinguishing complex backgrounds and capturing distinctive ship features. To address these issues, we introduce a novel prompt tuning technique that employs a hierarchical, multi-granularity prompt design. Our approach integrates remote sensing ship priors through bias terms learned by a small trainable network; this strategy enhances the model's generalization while improving its ability to discern intricate backgrounds and learn discriminative ship features. Furthermore, we contribute a comprehensive dataset, FGSCM-52, which significantly expands existing datasets with more extensive data and detailed annotations for less common ship classes. Extensive experimental evaluations demonstrate the superiority of the proposed method over current state-of-the-art techniques. The source code will be made publicly available.
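A hedged sketch of the core idea, in the spirit of CoOp/CoCoOp-style prompt tuning with an image-conditioned bias term, is given below. The module names, dimensions, and the frozen-encoder interface are assumptions for illustration, not the paper's actual code.

```python
# Illustrative prompt learner: shared learnable context tokens plus a bias
# term produced by a small trainable network from frozen image features.
import torch
import torch.nn as nn

class BiasedPromptLearner(nn.Module):
    def __init__(self, n_ctx: int = 4, ctx_dim: int = 512, feat_dim: int = 512):
        super().__init__()
        # Learnable context tokens shared across classes.
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # Small trainable network producing a per-image bias term,
        # intended to inject image-specific (e.g. ship) priors.
        self.bias_net = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.ReLU(),
            nn.Linear(feat_dim // 4, ctx_dim),
        )

    def forward(self, image_feat: torch.Tensor) -> torch.Tensor:
        """image_feat: (batch, feat_dim) from a frozen image encoder.
        Returns prompt tokens of shape (batch, n_ctx, ctx_dim)."""
        bias = self.bias_net(image_feat).unsqueeze(1)   # (batch, 1, ctx_dim)
        return self.ctx.unsqueeze(0) + bias             # broadcast over tokens
```

The resulting prompt tokens would then be prepended to class-name token embeddings before the frozen text encoder, as in standard prompt tuning pipelines.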
Channel Exchanging Networks for Multimodal and Multitask Dense Image Prediction
Multimodal fusion and multitask learning are two vital topics in machine learning. Despite fruitful progress, existing methods for both problems remain brittle to the same challenge: it is difficult to integrate the common information across modalities (resp. tasks) while preserving the specific patterns of each modality (resp. task). Moreover, although they are closely related, multimodal fusion and multitask learning have rarely been explored within the same methodological framework. In this paper, we propose the Channel-Exchanging-Network (CEN), which is self-adaptive, parameter-free, and, more importantly, applicable to both multimodal fusion and multitask learning. At its core, CEN dynamically exchanges channels between subnetworks of different modalities. Specifically, the channel exchanging process is self-guided by individual channel importance, measured by the magnitude of the Batch-Normalization (BN) scaling factor during training. For dense image prediction, the validity of CEN is tested in four scenarios: multimodal fusion, cycle multimodal fusion, multitask learning, and multimodal multitask learning. Extensive experiments on semantic segmentation with RGB-D data and image translation from multi-domain input verify the effectiveness of CEN compared to current state-of-the-art methods. Detailed ablation studies have also been carried out, affirming the advantage of each component we propose.
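A minimal sketch of the BN-guided exchanging mechanism the abstract describes is shown below: channels whose BN scaling factor (gamma) falls below a threshold are deemed unimportant and replaced by the other modality's channels. The threshold value and the two-modality setup are illustrative assumptions, not the authors' implementation.

```python
# Illustrative BN-guided channel exchange between two modality subnetworks.
import torch
import torch.nn as nn

def exchange_channels(x1: torch.Tensor, x2: torch.Tensor,
                      bn1: nn.BatchNorm2d, bn2: nn.BatchNorm2d,
                      threshold: float = 1e-2):
    """x1, x2: (batch, C, H, W) features of two modality subnetworks,
    taken after their respective BN layers bn1 and bn2."""
    mask1 = bn1.weight.abs() < threshold   # low-importance channels of x1
    mask2 = bn2.weight.abs() < threshold   # low-importance channels of x2
    out1, out2 = x1.clone(), x2.clone()
    out1[:, mask1] = x2[:, mask1]          # replace with the other modality
    out2[:, mask2] = x1[:, mask2]
    return out1, out2
```

In this reading, a sparsity penalty on the BN gammas during training would drive unimportant channels toward zero, making the exchange self-guided rather than hand-tuned.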
