
    Research on the Construction of Ecological Teaching Mode of Oral English for Professional Degree Postgraduates

    Building an effective oral English classroom for postgraduates has long been both a focus of and a difficulty in postgraduate teaching. For a long time, postgraduate oral English instruction has followed the general college English oral curriculum, which is out of touch with what society and postgraduates actually need from English. There is no curriculum system tailored to the characteristics and training objectives of postgraduates, and the existing one cannot meet the requirement of cultivating postgraduates' diversified oral communication skills. This paper studies and explores an ecological teaching mode of oral English for engineering master's students from the perspective of educational ecology theory, focusing on the importance of the concept of "supply" in the ecological teaching of postgraduates' oral English, with the aim of improving oral proficiency to meet the demands of postgraduate academic and everyday oral communication.

    Asymptotic profiles for Choquard equations with general critical nonlinearities

    In this paper, we study the asymptotic behavior of positive ground state solutions for the nonlinear Choquard equation \begin{equation}\label{0.1} -\Delta u+\varepsilon u=\big(I_{\alpha}\ast F(u)\big)F'(u),\quad u\in H^1(\mathbb R^N), \end{equation} where $F(u)=|u|^{\frac{N+\alpha}{N-2}}+G(u)$, $N\geq3$ is an integer, $I_{\alpha}$ is the Riesz potential of order $\alpha\in(0,N)$, and $\varepsilon>0$ is a parameter. Under some mild subcritical growth assumptions on $G(u)$, we show that as $\varepsilon \to \infty$, the ground state solutions of \eqref{0.1}, after a suitable rescaling, converge to a particular solution of the critical Choquard equation $-\Delta u=\frac{N+\alpha}{N-2}\big(I_{\alpha}\ast|u|^{\frac{N+\alpha}{N-2}}\big)|u|^{\frac{N+\alpha}{N-2}-2}u$. We establish a novel sharp asymptotic characterisation of such a rescaling, which depends in a non-trivial way on the asymptotic behavior of $G(u)$ at infinity and on the space dimension $N=3$, $N=4$, or $N\geq5$. Comment: 46 pages, 0 figures. arXiv admin note: text overlap with arXiv:2302.13727, arXiv:2405.0287
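    For readers skimming the listing, the Riesz potential referenced above has the standard normalization shown below; the paper may use an equivalent constant, so treat this as a reference definition rather than the authors' exact convention.

```latex
% Standard definition of the Riesz potential of order \alpha \in (0, N) on \mathbb{R}^N,
% with the normalization constant commonly used in the Choquard literature.
\[
  I_{\alpha}(x) \;=\; \frac{A_{\alpha}}{|x|^{N-\alpha}},
  \qquad
  A_{\alpha} \;=\; \frac{\Gamma\!\left(\tfrac{N-\alpha}{2}\right)}
                        {\Gamma\!\left(\tfrac{\alpha}{2}\right)\,\pi^{N/2}\,2^{\alpha}},
\]
% so that the nonlocal term is the convolution
% (I_{\alpha} \ast F(u))(x) = \int_{\mathbb{R}^N} I_{\alpha}(x-y)\, F(u(y))\, dy.
```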

    Attributes-Guided and Pure-Visual Attention Alignment for Few-Shot Recognition

    The purpose of few-shot recognition is to recognize novel categories with a limited number of labeled examples in each class. To encourage learning from a supplementary view, recent approaches have introduced auxiliary semantic modalities into effective metric-learning frameworks that aim to learn a feature similarity between training samples (support set) and test samples (query set). However, these approaches only augment the representations of samples with available semantics while ignoring the query set, which forgoes potential improvements and may lead to a shift between the combined-modality and pure-visual representations. In this paper, we devise an attributes-guided attention module (AGAM) that utilizes human-annotated attributes to learn more discriminative features. This plug-and-play module enables visual contents and corresponding attributes to collectively focus on important channels and regions for the support set, and it achieves feature selection for the query set using visual information alone, since attributes are not available there. Therefore, representations from both sets are improved in a fine-grained manner. Moreover, an attention alignment mechanism is proposed to distill knowledge from the attribute-guided branch to the pure-visual branch for samples without attributes. Extensive experiments and analysis show that our proposed module can significantly improve simple metric-based approaches and achieve state-of-the-art performance on different datasets and settings. Comment: An expanded version of the same-name paper accepted by AAAI-202
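    The channel-and-region attention described above lends itself to a compact sketch. The snippet below is a minimal, hypothetical PyTorch illustration of attribute-guided channel attention only (the region branch and the alignment loss are omitted); the layer sizes, the additive fusion of the two branches, and all names are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed design, not the AGAM reference code): channel attention that
# can be guided by human-annotated attributes when they are available (support set)
# and falls back to the pure-visual branch when they are not (query set).
import torch
import torch.nn as nn

class AttributeGuidedChannelAttention(nn.Module):
    def __init__(self, num_channels, attr_dim, hidden=64):
        super().__init__()
        # Visual branch: squeeze spatial dims, then predict per-channel logits.
        self.visual_fc = nn.Sequential(
            nn.Linear(num_channels, hidden), nn.ReLU(),
            nn.Linear(hidden, num_channels),
        )
        # Attribute branch: project annotated attributes into the same channel space.
        self.attr_fc = nn.Sequential(
            nn.Linear(attr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_channels),
        )

    def forward(self, feat, attrs=None):
        # feat: (B, C, H, W); attrs: (B, attr_dim) for support samples, None for queries.
        pooled = feat.mean(dim=(2, 3))             # global average pooling -> (B, C)
        logits = self.visual_fc(pooled)            # pure-visual attention logits
        if attrs is not None:
            logits = logits + self.attr_fc(attrs)  # attributes guide the attention
        weights = torch.sigmoid(logits)[:, :, None, None]
        return feat * weights                      # channel-reweighted feature map

# Support features get attribute guidance; query features use the visual branch only.
# An alignment loss (not shown) would pull the two attention patterns together.
module = AttributeGuidedChannelAttention(num_channels=64, attr_dim=312)
support = module(torch.randn(4, 64, 5, 5), attrs=torch.rand(4, 312))
query = module(torch.randn(4, 64, 5, 5))
```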

    Beyond Reward: Offline Preference-guided Policy Optimization

    This study focuses on the topic of offline preference-based reinforcement learning (PbRL), a variant of conventional reinforcement learning that dispenses with the need for online interaction or specification of reward functions. Instead, the agent is provided with fixed offline trajectories and human preferences between pairs of trajectories to extract the dynamics and task information, respectively. Since the dynamics and task information are orthogonal, a naive approach would involve using preference-based reward learning followed by an off-the-shelf offline RL algorithm. However, this requires the separate learning of a scalar reward function, which is assumed to be an information bottleneck of the learning process. To address this issue, we propose the offline preference-guided policy optimization (OPPO) paradigm, which models offline trajectories and preferences in a one-step process, eliminating the need for separately learning a reward function. OPPO achieves this by introducing an offline hindsight information matching objective for optimizing a contextual policy and a preference modeling objective for finding the optimal context. OPPO further integrates a well-performing decision policy by optimizing the two objectives iteratively. Our empirical results demonstrate that OPPO effectively models offline preferences and outperforms prior competing baselines, including offline RL algorithms performed over either true or pseudo reward function specifications. Our code is available on the project website: https://sites.google.com/view/oppo-icml-2023
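    To make the one-step idea concrete, here is a minimal, hypothetical sketch of the two objectives named above: a hindsight information matching loss for the contextual policy, and a Bradley-Terry-style preference loss over a learned optimal context. The architectures, the similarity-based preference score, and all names are assumptions rather than the OPPO implementation.

```python
# Minimal sketch (assumed design, not the OPPO reference code) of the two objectives:
# 1) hindsight information matching for a contextual policy, and 2) preference modeling
# that searches for an "optimal" context z_star without learning a scalar reward.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajEncoder(nn.Module):
    def __init__(self, step_dim, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(step_dim, 128), nn.ReLU(), nn.Linear(128, z_dim))

    def forward(self, traj):
        # traj: (B, T, step_dim) -> mean-pooled hindsight embedding (B, z_dim)
        return self.net(traj).mean(dim=1)

def him_loss(policy, encoder, traj, obs, act):
    # Hindsight information matching: condition the policy on a trajectory's own
    # embedding and reconstruct the actions taken along it (behavior-cloning style).
    z = encoder(traj)                       # (B, z_dim), paired with obs/act from traj
    return F.mse_loss(policy(obs, z), act)

def preference_loss(encoder, z_star, traj_win, traj_lose):
    # Bradley-Terry-style objective: the preferred trajectory's embedding should lie
    # closer to the optimal context z_star than the rejected trajectory's embedding.
    s_win = -(encoder(traj_win) - z_star).pow(2).sum(-1)
    s_lose = -(encoder(traj_lose) - z_star).pow(2).sum(-1)
    return -F.logsigmoid(s_win - s_lose).mean()
```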

    VGDiffZero: Text-to-image Diffusion Models Can Be Zero-shot Visual Grounders

    Large-scale text-to-image diffusion models have shown impressive capabilities across various generative tasks, enabled by strong vision-language alignment obtained through pre-training. However, most vision-language discriminative tasks require extensive fine-tuning on carefully labeled datasets to acquire such alignment, at great cost in time and computing resources. In this work, we explore directly applying a pre-trained generative diffusion model to the challenging discriminative task of visual grounding without any fine-tuning or additional training data. Specifically, we propose VGDiffZero, a simple yet effective zero-shot visual grounding framework based on text-to-image diffusion models. We also design a comprehensive region-scoring method considering both global and local contexts of each isolated proposal. Extensive experiments on RefCOCO, RefCOCO+, and RefCOCOg show that VGDiffZero achieves strong performance on zero-shot visual grounding.
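    A minimal sketch of the zero-shot scoring loop implied above, under stated assumptions: `denoise_error` stands in for a conditional denoising error (e.g., noise-prediction MSE) from any pre-trained text-to-image diffusion model, and `crop` / `mask_outside` are user-supplied helpers; the combination rule is illustrative, not the paper's exact region-scoring method.

```python
# Minimal sketch (assumed interface, not the VGDiffZero code): score each region proposal
# with a pre-trained text-to-image diffusion model by combining a local view (the isolated
# crop) and a global view (the full image with everything outside the proposal masked).
from typing import Callable, Sequence, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2)

def ground_expression(image,
                      expression: str,
                      proposals: Sequence[Box],
                      denoise_error: Callable,   # (image, text) -> float, lower = better match
                      crop: Callable,            # (image, box) -> cropped image
                      mask_outside: Callable,    # (image, box) -> image with outside masked
                      alpha: float = 0.5) -> int:
    """Return the index of the proposal whose content best matches the expression."""
    best_idx, best_score = 0, float("inf")
    for i, box in enumerate(proposals):
        local_err = denoise_error(crop(image, box), expression)           # local context
        global_err = denoise_error(mask_outside(image, box), expression)  # global context
        score = alpha * local_err + (1.0 - alpha) * global_err
        if score < best_score:
            best_idx, best_score = i, score
    return best_idx
```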

    Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization

    In this work, we decouple the iterative bi-level offline RL (value estimation and policy extraction) from the offline training phase, forming a non-iterative bi-level paradigm and avoiding the iterative error propagation over two levels. Specifically, this non-iterative paradigm allows us to conduct inner-level optimization (value estimation) in training, while performing outer-level optimization (policy extraction) in testing. Naturally, such a paradigm raises three core questions that are not fully answered by prior non-iterative offline RL counterparts like reward-conditioned policy: (q1) What information should we transfer from the inner level to the outer level? (q2) What should we pay attention to when exploiting the transferred information for safe/confident outer-level optimization? (q3) What are the benefits of concurrently conducting outer-level optimization during testing? Motivated by model-based optimization (MBO), we propose DROP (design from policies), which fully answers the above questions. Specifically, in the inner level, DROP decomposes the offline data into multiple subsets and learns an MBO score model (a1). To keep the outer-level exploitation of the score model safe, we explicitly learn a behavior embedding and introduce a conservative regularization (a2). During testing, we show that DROP permits deployment adaptation, enabling adaptive inference across states (a3). Empirically, we evaluate DROP on various tasks, showing that DROP achieves comparable or better performance than prior methods. Comment: NeurIPS 202
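    The outer-level, test-time step described above can be sketched as follows. The snippet assumes a learned score model J(s, z) and a set of behavior embeddings obtained from the offline subsets; restricting the search to those embeddings serves as this sketch's stand-in for the conservative regularization, and all names are illustrative rather than DROP's actual interface.

```python
# Minimal sketch (assumed design, not the DROP reference code) of outer-level, test-time
# optimization: at each state, pick the behavior embedding that the learned MBO score
# model predicts will perform best, and condition the contextual policy on it.
import torch

def select_context(state, score_model, behavior_embeddings):
    # state: (obs_dim,); behavior_embeddings: (K, z_dim) learned from the K offline subsets.
    s = state.unsqueeze(0).expand(behavior_embeddings.size(0), -1)   # (K, obs_dim)
    scores = score_model(s, behavior_embeddings).squeeze(-1)         # predicted scores J(s, z_k)
    # Searching only over embeddings seen in training keeps the score-model queries
    # in-distribution, which is the conservative choice in this sketch.
    return behavior_embeddings[torch.argmax(scores)]

# Deployment adaptation: re-running select_context at every state lets the conditioning
# context change across states within a single test episode.
```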

    Genetically encoded libraries and spider venoms as emerging sources for crop protective peptides

    Agricultural crops are targeted by various pathogens (fungi, bacteria, and viruses) and pests (herbivorous arthropods). Antimicrobial and insecticidal peptides are increasingly recognized as eco-friendly tools for crop protection due to their low propensity for resistance development and the fact that they are fully biodegradable. However, historical challenges have hindered their development, including poor stability, limited availability, reproducibility issues, high production costs, and unwanted toxicity. Toxicity is a primary concern because crop-protective peptides interact with various organisms of environmental and economic significance. This review focuses on the potential of genetically encoded peptide libraries (for example, two-hybrid-based methods for identifying antimicrobial peptides) and insecticidal spider venom peptides as two main approaches for targeting plant pathogens and pests. We discuss key findings and challenges regarding the practical application of each strategy. We conclude that crop-protective peptides derived from genetically encoded peptide libraries and spider venoms offer a sustainable and environmentally responsible approach for addressing modern crop protection needs in the agricultural sector.

    CEIL: Generalized Contextual Imitation Learning

    In this paper, we present \textbf{C}ont\textbf{E}xtual \textbf{I}mitation \textbf{L}earning~(CEIL), a general and broadly applicable algorithm for imitation learning (IL). Inspired by the formulation of hindsight information matching, we derive CEIL by explicitly learning a hindsight embedding function together with a contextual policy using the hindsight embeddings. To achieve the expert matching objective for IL, we advocate for optimizing a contextual variable such that it biases the contextual policy towards mimicking expert behaviors. Beyond the typical learning from demonstrations (LfD) setting, CEIL is a generalist that can be effectively applied to multiple settings, including: 1) learning from observations (LfO), 2) offline IL, 3) cross-domain IL (mismatched experts), and 4) one-shot IL. Empirically, we evaluate CEIL on the popular MuJoCo tasks (online) and the D4RL dataset (offline). Compared to prior state-of-the-art baselines, we show that CEIL is more sample-efficient in most online IL tasks and achieves better or competitive performance in offline tasks. Comment: NeurIPS 202
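    As a concrete reading of the recipe above, the sketch below pairs a hindsight embedding of trajectories with a contextual policy and an expert-matching objective over the contextual variable. The dimensions, architectures, and squared-distance matching loss are assumptions for illustration, not the CEIL implementation.

```python
# Minimal sketch (assumed design, not the CEIL reference code): a hindsight embedding of
# trajectories, a contextual policy trained to reproduce each trajectory under its own
# embedding, and a contextual variable z_star optimized to match expert embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM, Z_DIM = 4, 2, 8                                   # toy sizes
encoder = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(), nn.Linear(64, Z_DIM))
policy = nn.Sequential(nn.Linear(OBS_DIM + Z_DIM, 64), nn.ReLU(), nn.Linear(64, ACT_DIM))

def hindsight_embedding(traj):
    # traj: (T, OBS_DIM + ACT_DIM), one row per (state, action) step.
    return encoder(traj).mean(dim=0)                                 # (Z_DIM,)

def contextual_bc_loss(traj, obs, act):
    # Train the policy to reproduce a trajectory's actions given that trajectory's embedding.
    z = hindsight_embedding(traj).expand(obs.size(0), -1)
    return F.mse_loss(policy(torch.cat([obs, z], dim=-1)), act)

def expert_matching_loss(z_star, expert_trajs):
    # Optimize the contextual variable toward the expert's hindsight embeddings so the
    # contextual policy, conditioned on z_star, is biased toward expert behavior.
    targets = torch.stack([hindsight_embedding(t) for t in expert_trajs])
    return (z_star - targets).pow(2).sum(-1).mean()
```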