
    Loss-of-function mutations in Lysyl-tRNA synthetase cause various leukoencephalopathy phenotypes

    Objective: To expand the clinical spectrum of lysyl-tRNA synthetase (KARS) gene–related diseases, which so far includes Charcot-Marie-Tooth disease, congenital visual impairment and microcephaly, and nonsyndromic hearing impairment. Methods: Whole-exome sequencing was performed on index patients from 4 unrelated families with leukoencephalopathy. Candidate pathogenic variants and their cosegregation were confirmed by Sanger sequencing. Effects of mutations on KARS protein function were examined by aminoacylation assays and yeast complementation assays. Results: Common clinical features of the patients in this study included impaired cognitive ability, seizures, hypotonia, ataxia, and abnormal brain imaging, suggesting that CNS involvement is the main clinical presentation. Six previously unreported and 1 known KARS mutations were identified and cosegregated in these families. Two patients are compound heterozygous for missense mutations, 1 patient is homozygous for a missense mutation, and 1 patient harbored an insertion mutation and a missense mutation. Functional and structural analyses revealed that these mutations impair the aminoacylation activity of lysyl-tRNA synthetase, indicating that defective KARS function is responsible for the phenotypes in these individuals. Conclusions: Our results demonstrate that patients with loss-of-function KARS mutations can manifest CNS disorders, thus broadening the phenotypic spectrum associated with KARS-related disease.

    FFN: a Fine-grained Chinese-English Financial Domain Parallel Corpus

    Large Language Models (LLMs) have dramatically advanced the field of machine translation, yet their effectiveness within the financial domain remains largely underexplored. To probe this issue, we constructed FFN, a fine-grained Chinese-English parallel corpus of financial news. We acquired financial news articles published between January 1, 2014 and December 31, 2023 from mainstream media websites such as CNN, FOX, and China Daily. The dataset consists of 1,013 main texts and 809 titles, all of which have been manually corrected. We measured the translation quality of two LLMs -- ChatGPT and ERNIE-bot -- using BLEU, TER, and chrF scores as the evaluation metrics. For comparison, we also trained an OpenNMT model on our dataset. We detail the problems of LLMs and provide an in-depth analysis, intending to stimulate further research and solutions in this largely uncharted territory. Our research underlines the need to optimize LLMs within the specific field of financial translation to ensure accuracy and quality.
    Comment: a simplified version of this paper is accepted by International Conference on Asian Language Processing 202
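    The abstract evaluates translations with BLEU, TER, and chrF. As an illustration only (not the authors' evaluation pipeline, for which a standard toolkit such as sacreBLEU would be used), here is a minimal sentence-level BLEU sketch with smoothing:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Toy sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty. Real evaluations should use
    a standard implementation such as sacreBLEU."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        # Add-one smoothing so one empty n-gram order does not zero the score.
        precisions.append((overlap + 1) / (total + 1))
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

    An identical hypothesis and reference score 1.0; a short partial match is penalized by both the missing n-grams and the brevity penalty.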

    Content Creator versus Brand Advertiser? The Effect of Inserting Advertisements in Videos on Influencers' Engagement

    Influencer advertising has become an indispensable component of online marketing due to the exponential growth of social influencers and their influence. Whereas the effectiveness of influencer endorsements is well studied from the brand or company perspective, how commercial endorsements affect influencers themselves is an important yet unexplored question. We empirically examine the instantaneous (measured using live comment sentiment) and longer-term (measured using video feedback and follower number change) influence of inserting advertisements in videos on influencers' reputation. We further investigate how this effect is moderated when influencers demonstrate stronger endorsement by showing their faces during advertisements. Our results suggest that inserting advertisements has a negative impact on both instantaneous and longer-term viewer engagement; showing the influencer's face moderates the negative effect of advertisements on viewers' instantaneous response, while the difference between advertisements with and without influencers showing their faces is not significant in the longer term.


    LogoStyleFool: Vitiating Video Recognition Systems via Logo Style Transfer

    Video recognition systems are vulnerable to adversarial examples. Recent studies show that style-transfer-based and patch-based unrestricted perturbations can effectively improve attack efficiency. These attacks, however, face two main challenges: 1) Adding large stylized perturbations to all pixels reduces the naturalness of the video, and such perturbations can be easily detected. 2) Patch-based video attacks are not extensible to targeted attacks due to the limited search space of reinforcement learning, which has been widely used in video attacks recently. In this paper, we focus on the video black-box setting and propose a novel attack framework named LogoStyleFool that adds a stylized logo to the clean video. We separate the attack into three stages: style reference selection, reinforcement-learning-based logo style transfer, and perturbation optimization. We solve the first challenge by scaling down the perturbation range to a regional logo, and address the second by appending a perturbation-optimization stage after reinforcement learning. Experimental results substantiate the overall superiority of LogoStyleFool over three state-of-the-art patch-based attacks in terms of attack performance and semantic preservation. Meanwhile, LogoStyleFool maintains its performance against two existing patch-based defense methods. We believe our research is beneficial in drawing the security community's attention to such subregional style transfer attacks.
    Comment: 14 pages, 3 figures. Accepted to AAAI 202

    An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model

    Recent studies have applied Parameter-Efficient Fine-Tuning techniques (PEFTs) to efficiently narrow the performance gap between pre-training and downstream tasks. Two factors are important for various PEFTs: the accessible data size and the fine-tunable parameter size. A natural expectation is that the performance of a PEFT is positively related to both. However, according to our evaluation of five PEFTs on two downstream vision-language (VL) tasks, we find that this intuition holds only if the downstream data and task are not consistent with pre-training. For downstream fine-tuning consistent with pre-training, data size no longer affects the performance, while the influence of fine-tunable parameter size is not monotonic. We believe this observation can guide the choice of training strategy for various PEFTs.
    Comment: Accepted by ICME202
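    The abstract does not name the specific PEFTs evaluated; as one hedged illustration of what "fine-tunable parameter size" means in this family of methods, a LoRA-style low-rank adapter keeps the base weight frozen and trains only a rank-r update (pure-Python sketch, not the paper's setup):

```python
# Minimal LoRA-style adapter sketch (illustrative; no frameworks).
# A frozen weight W (d_out x d_in) is augmented with a trainable
# low-rank update B @ A of rank r << min(d_in, d_out), so only
# r * (d_in + d_out) parameters are fine-tuned instead of d_in * d_out.

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def lora_forward(W, A, B, x, scale=1.0):
    """y = W x + scale * B (A x): frozen base plus low-rank update."""
    base = matvec(W, x)
    low = matvec(B, matvec(A, x))
    return [b + scale * l for b, l in zip(base, low)]

def trainable_params(d_in, d_out, r):
    full = d_in * d_out           # full fine-tuning
    lora = r * (d_in + d_out)     # adapter only
    return full, lora
```

    For a 4096x4096 layer at rank 8, the adapter trains 65,536 parameters versus 16,777,216 for full fine-tuning, i.e. under 0.4%.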

    MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling

    We present MIRReS, a novel two-stage inverse rendering framework that jointly reconstructs and optimizes the explicit geometry, material, and lighting from multi-view images. Unlike previous methods that rely on implicit irradiance fields or simplified path tracing algorithms, our method extracts an explicit geometry (triangular mesh) in stage one and introduces a more realistic physically based inverse rendering model that utilizes multi-bounce path tracing and Monte Carlo integration. By leveraging multi-bounce path tracing, our method effectively estimates indirect illumination, including self-shadowing and internal reflections, which improves the intrinsic decomposition of shape, material, and lighting. Moreover, we incorporate reservoir sampling into our framework to address the noise in Monte Carlo integration, enhancing convergence and facilitating gradient-based optimization with low sample counts. Through qualitative and quantitative evaluation of several scenarios, especially challenging ones with complex shadows, we demonstrate that our method achieves state-of-the-art decomposition results. Additionally, our optimized explicit geometry enables applications such as scene editing, relighting, and material editing with modern graphics engines or CAD software. The source code is available at https://brabbitdousha.github.io/MIRReS/
    Comment: 16 pages, 14 figures
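    The rendering-specific way MIRReS applies reservoir sampling is not given in the abstract, but the underlying primitive is classic. As a hedged sketch of the core idea only, Algorithm R draws a uniform sample from a stream of unknown length in one pass and O(k) memory:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Algorithm R: keep a uniform random sample of k items from a
    stream of unknown length, using one pass and O(k) memory."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)        # uniform index in [0, i]
            if j < k:
                reservoir[j] = item         # replace with probability k/(i+1)
    return reservoir
```

    If the stream has fewer than k items, the whole stream is returned; otherwise every item ends up in the sample with equal probability k/n.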

    Can Language Models Pretend Solvers? Logic Code Simulation with LLMs

    Transformer-based large language models (LLMs) have demonstrated significant potential in addressing logic problems. Capitalizing on the great capabilities of LLMs for code-related tasks, several frameworks that leverage logical solvers for logic reasoning have been proposed recently. While existing research predominantly focuses on viewing LLMs as natural language logic solvers or translators, their roles as logic code interpreters and executors have received limited attention. This study delves into a novel aspect, logic code simulation, which requires LLMs to emulate logical solvers in predicting the results of logical programs. To investigate this task, we formulate three research questions: Can LLMs efficiently simulate the outputs of logic code? What strengths arise with logic code simulation? And what are its pitfalls? To address these inquiries, we curate three novel datasets tailored for the logic code simulation task and undertake thorough experiments to establish the baseline performance of LLMs in code simulation. Subsequently, we introduce a pioneering LLM-based code simulation technique, Dual Chains of Logic (DCoL). This technique advocates a dual-path thinking approach for LLMs and achieves state-of-the-art performance compared to other LLM prompting strategies, improving accuracy by a notable 7.06% with GPT-4-Turbo.
    Comment: 12 pages, 8 figures
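    To make "logic code simulation" concrete: the LLM is asked to predict what a solver would output. As a hedged point of reference (this is not DCoL or the paper's datasets), the ground truth for a small propositional formula can be produced by brute-force enumeration:

```python
from itertools import product

def brute_force_sat(formula, variables):
    """Ground truth that a logic-code-simulating LLM must predict:
    exhaustively check all assignments of a propositional formula
    given as a Python callable over boolean variables."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(**assignment):
            return "sat", assignment
    return "unsat", None

# Example formula: (a or b) and (not a or c) and (not b)
status, model = brute_force_sat(
    lambda a, b, c: (a or b) and ((not a) or c) and (not b),
    ["a", "b", "c"],
)
```

    Here the formula is satisfiable with a=True, b=False, c=True; predicting such outcomes without executing the code is exactly the simulation task the paper studies.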

    Effects of Exponential Gaussian Distribution on (Double Sampling) Randomized Smoothing

    Randomized Smoothing (RS) is currently a scalable certified defense method providing robustness certification against adversarial examples. Although significant progress has been achieved in providing defenses against ℓ_p adversaries, the interaction between the smoothing distribution and the robustness certification still remains vague. In this work, we comprehensively study the effect of two families of distributions, named Exponential Standard Gaussian (ESG) and Exponential General Gaussian (EGG) distributions, on Randomized Smoothing and Double Sampling Randomized Smoothing (DSRS). We derive an analytic formula for ESG's certified radius, which converges to the original formula of RS as the dimension d increases. Additionally, we prove that EGG can provide tighter constant factors than DSRS in providing Ω(√d) lower bounds on the ℓ_2 certified radius, and thus further addresses the curse of dimensionality in RS. Our experiments on real-world datasets confirm our theoretical analysis of the ESG distributions: they provide almost the same certification under different exponents η for both RS and DSRS. In addition, EGG brings a significant improvement to the DSRS certification, though the mechanism can differ depending on the classifier's properties. Compared to the primitive DSRS, the increase in certified accuracy provided by EGG is prominent, up to 6.4% on ImageNet.
    Comment: ICML 2024 Poster
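    For context, the baseline Gaussian RS certificate that ESG's formula converges to gives a certified ℓ_2 radius of σ·Φ⁻¹(p_A), where p_A is the probability that the base classifier votes for the top class under noise. A hedged stdlib sketch of that baseline (using the empirical p_A directly; a real certificate needs a lower confidence bound on p_A, and the ESG/EGG variants studied here use different noise distributions):

```python
import random
from collections import Counter
from statistics import NormalDist

def smoothed_predict_and_radius(classifier, x, sigma, n=1000, rng=None):
    """Baseline Gaussian randomized smoothing: majority vote under
    N(0, sigma^2 I) noise, with certified l2 radius
    sigma * Phi^{-1}(p_A). Illustrative only: uses the empirical p_A,
    whereas a sound certificate needs a confidence lower bound."""
    rng = rng or random.Random(0)
    votes = Counter()
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[classifier(noisy)] += 1
    top_class, top_count = votes.most_common(1)[0]
    p_a = min(top_count / n, 1.0 - 1.0 / n)   # keep inv_cdf finite
    if p_a <= 0.5:
        return top_class, 0.0                 # abstain: no certificate
    return top_class, sigma * NormalDist().inv_cdf(p_a)
```

    A larger σ inflates the radius factor but typically lowers p_A on hard inputs, which is exactly the distribution/certification interaction the paper analyzes.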

    LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model

    Previous work has shown that well-crafted adversarial perturbations can threaten the security of video recognition systems. Attackers can invade such models with a low query budget when the perturbations are semantic-invariant, as in StyleFool. Despite its query efficiency, the naturalness of fine-detail areas still requires improvement, since StyleFool applies style transfer to all pixels in each frame. To close the gap, we propose LocalStyleFool, an improved black-box video adversarial attack that superimposes regional style-transfer-based perturbations on videos. Benefiting from the popularity and scalable usability of the Segment Anything Model (SAM), we first extract different regions according to semantic information and then track them through the video stream to maintain temporal consistency. We then add style-transfer-based perturbations to several regions selected by a criterion combining transfer-based gradient information and regional area. A final fine adjustment of the perturbations makes the stylized videos adversarial. We demonstrate through a human-assessed survey that LocalStyleFool improves both intra-frame and inter-frame naturalness while maintaining a competitive fooling rate and query efficiency. Successful experiments on a high-resolution dataset also show that the scrupulous segmentation of SAM helps improve the scalability of adversarial attacks on high-resolution data.
    Comment: Accepted to 2024 IEEE Security and Privacy Workshops (SPW)