383 research outputs found

    Harnessing LSTM for Nonlinear Ship Deck Motion Prediction in UAV Autonomous Landing amidst High Sea States

    Autonomous landing of UAVs in high sea states requires the UAV to land exclusively during the ship deck's "rest period," when deck movement is minimal. Determining this "rest period" from the ship's movement patterns is therefore a fundamental prerequisite for addressing the challenge. This study employs a Long Short-Term Memory (LSTM) neural network to predict the ship's motion along three axes: longitudinal, transverse, and vertical. In the absence of actual ship data under high sea states, the paper uses a composite sine wave model to simulate ship deck motion. The resulting model is highly accurate and performs well across various stochastic sine wave combinations.
    Comment: 11 pages, 7 figures, accepted by ICANDVC202
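    The composite sine wave simulation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the amplitudes, frequencies, phases, sampling rate, and window length are all hypothetical values chosen for the example, and only the supervised-window framing that typically feeds an LSTM forecaster is shown.

    ```python
    import numpy as np

    def composite_sine_wave(t, components):
        """Sum of sine components (amplitude, angular frequency, phase) —
        a stand-in for simulated ship deck displacement along one axis."""
        return sum(a * np.sin(w * t + p) for a, w, p in components)

    # Hypothetical heave components: (amplitude in m, rad/s, phase)
    components = [(1.5, 0.6, 0.0), (0.8, 1.1, 1.2), (0.3, 2.3, 0.5)]
    t = np.linspace(0, 60, 600)          # 60 s sampled at 10 Hz
    heave = composite_sine_wave(t, components)

    # Sliding windows: the past 50 samples predict the next sample,
    # the usual supervised framing for an LSTM time-series forecaster.
    window = 50
    X = np.stack([heave[i:i + window] for i in range(len(heave) - window)])
    y = heave[window:]
    print(X.shape, y.shape)  # (550, 50) (550,)
    ```

    Randomizing the component parameters per run would yield the "stochastic sine wave combinations" the abstract evaluates against.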

    Scaling Data Diversity for Fine-Tuning Language Models in Human Alignment

    Alignment with human preference prevents large language models (LLMs) from generating misleading or toxic content, but requires high-cost human feedback. When human annotation resources are limited, there are two ways to allocate them: labeling more diverse PROMPTS or more diverse RESPONSES. Nonetheless, a direct comparison of their impact has been absent. In this work, we first control the diversity of both sides according to the number of fine-tuning samples, which directly reflects their influence. We find that more responses with fewer prompts, rather than numerous prompts, better trigger LLMs for human alignment. Additionally, prompt diversity can be more complex to characterize than response diversity, which is typically quantified by a single number. Consequently, a new formulation of prompt diversity is proposed, which further exhibits a linear correlation with the final performance of LLMs after fine-tuning. We also leverage it for data augmentation and conduct experiments to show its effect on different algorithms.
    Comment: Accepted by LREC-COLING 202

    Automatic Figure Ranking and User Interfacing for Intelligent Figure Search

    Figures are important experimental results that are typically reported in full-text bioscience articles. Bioscience researchers need to access figures to validate research facts and to formulate or test novel research hypotheses. On the other hand, the sheer volume of bioscience literature has made it difficult to access figures. Therefore, we are developing an intelligent figure search engine (http://figuresearch.askhermes.org). Existing research in figure search treats each figure equally, but we introduce a novel concept of "figure ranking": figures appearing in a full-text biomedical article can be ranked by their contribution to knowledge discovery. We empirically validated the hypothesis of figure ranking with over 100 bioscience researchers, and then developed unsupervised natural language processing (NLP) approaches to automatically rank figures. Evaluated on a collection of 202 full-text articles in which the authors had ranked the figures by importance, our best system achieved a weighted error rate of 0.2, significantly better than several other baseline systems we explored. We further built novel user interfaces (UIs) incorporating figure ranking, allowing bioscience researchers to efficiently access important figures. Our evaluation shows that 92% of the bioscience researchers ranked the UIs that enlarge the most important figures among their top two choices. Researchers also preferred the UIs in which the most important figures were predicted by our NLP system over the UIs in which the most important figures were randomly assigned.
In addition, our results show no statistical difference in researchers' preference between UIs generated by automatic figure ranking and UIs generated from human ranking annotation. These evaluation results indicate that automatic figure ranking and the user interfacing reported in this study can be fully implemented in online publishing. The novel user interface integrated with the automatic figure ranking system provides a more efficient and robust way to access scientific information in the biomedical domain, which will further enhance our existing figure search engine to better facilitate access to figures of interest for bioscientists.

    API-Bank: A Comprehensive Benchmark for Tool-Augmented LLMs

    Recent research has demonstrated that Large Language Models (LLMs) can enhance their capabilities by utilizing external tools. However, three pivotal questions remain unanswered: (1) How effective are current LLMs at utilizing tools? (2) How can we enhance LLMs' ability to utilize tools? (3) What obstacles must be overcome to leverage tools? To address these questions, we introduce API-Bank, a groundbreaking benchmark specifically designed for tool-augmented LLMs. For the first question, we develop a runnable evaluation system consisting of 73 API tools. We annotate 314 tool-use dialogues with 753 API calls to assess existing LLMs' capabilities in planning, retrieving, and calling APIs. For the second question, we construct a comprehensive training set containing 1,888 tool-use dialogues from 2,138 APIs spanning 1,000 distinct domains. Using this dataset, we train Lynx, a tool-augmented LLM initialized from Alpaca. Experimental results demonstrate that GPT-3.5 exhibits improved tool utilization compared to GPT-3, while GPT-4 excels in planning. However, there is still significant potential for further improvement. Moreover, Lynx surpasses Alpaca's tool utilization performance by more than 26 points and approaches the effectiveness of GPT-3.5. Through error analysis, we highlight the key challenges for future research in this field, answering the third question.
    Comment: EMNLP 202
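    A call-level evaluation of the kind such a benchmark performs can be sketched in a few lines. This is a hypothetical stand-in, not API-Bank's actual metric or data format: the tool names, argument dictionaries, and exact-match scoring are assumptions chosen to illustrate comparing predicted API calls against annotated gold calls.

    ```python
    def api_call_accuracy(predicted, gold):
        """Fraction of positions where the predicted API call
        (tool name plus argument dict) exactly matches the gold call."""
        correct = sum(p == g for p, g in zip(predicted, gold))
        return correct / len(gold)

    # Hypothetical dialogue: two annotated calls, one predicted correctly.
    pred = [("GetWeather", {"city": "Oslo"}), ("AddAlarm", {"time": "07:00"})]
    gold = [("GetWeather", {"city": "Oslo"}), ("AddAlarm", {"time": "06:30"})]
    print(api_call_accuracy(pred, gold))  # 0.5
    ```

    Real tool-use evaluation would also need to score partial argument matches and planning order, which exact match ignores.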

    Antagonistic Effects of Sphingomonas and Pseudomonas aeruginosa on 4 Kinds of Pathogenic Bacteria of Ginseng

    [Objectives] To explore effective biocontrol methods for diseases in the process of ginseng cultivation, and to develop an efficient and environmentally friendly biocontrol agent. [Methods] In this study, 2 strains were isolated from biogas slurry, and Cylindrocarpon destructans (XF), Fusarium solani (GF), Botrytis cinerea Pers (HM) and Alternaria panax Whetz (HB) were used as test materials. The strains were isolated and identified by the dilution plate method, 16S rDNA sequence identification, the confrontation culture method, the filter paper method and ultraviolet spectrophotometry, and their bacteriostatic activity and bacteriostatic rate were tested. [Results] Strain 15 (Sphingomonas) and strain 19 (Pseudomonas aeruginosa) were screened out through identification and analysis, and they grew stably within 8-10 days. The bacteriostatic rates of strain 15 against A. panax and B. cinerea were 47.37% and 43.40%, respectively, and those of strain 19 were 62.30% and 63.27%, respectively. The bacteriostatic activity of the extract of strain 19 increased with the OD600 value, and the bacteriostatic effect was optimal (up to about 70%) when the OD600 value was in the range of 0.8-1, indicating strong biocontrol potential. [Conclusions] This experiment facilitates more effective inoculation, establishes a fast, simple and accurate method for determining the best bacteriostatic rate of P. aeruginosa culture solution against HM, and lays a foundation for large-scale culture of P. aeruginosa culture solution. It is also expected to provide a theoretical basis for the efficient control of ginseng B. cinerea in field production, to support the prevention and control of ginseng shoot diseases, and to serve as a reference for the efficient and diverse development of biocontrol agents for ginseng shoot diseases.