
    Multiple-relaxation-time lattice Boltzmann model for compressible fluids

    We present an energy-conserving multiple-relaxation-time finite-difference lattice Boltzmann model for compressible flows, based on a 16-discrete-velocity model. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and the corresponding transformation matrix are constructed according to group representation theory. Equilibria of the non-conserved moments are chosen so that the compressible Navier-Stokes equations are recovered through the Chapman-Enskog expansion. Numerical experiments show that compressible flows with strong shocks are well simulated by the present model. The benchmark tests include (i) shock tubes, such as the Sod, Lax, Sjogreen, and Colella explosion-wave problems and the collision of two strong shocks, (ii) regular and Mach shock reflections, and (iii) the interaction of a shock wave with a cylindrical bubble. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version.
    Comment: 11 figures, Revte
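    The structure of an MRT collision step as described above (map to moments, relax each moment at its own rate, map back) can be sketched as follows. Note that this is an illustrative toy, not the paper's 16-velocity model: the transformation matrix, relaxation rates, and equilibrium moments below are made up purely to show the shape of the update.

```python
import numpy as np

def mrt_collision(f, M, S, m_eq):
    """One MRT collision: f' = M^{-1} (m - S (m - m_eq)), with m = M f."""
    m = M @ f                          # map distributions to moment space
    m_post = m - S @ (m - m_eq)        # relax each moment toward equilibrium
    return np.linalg.inv(M) @ m_post   # map back to velocity space

# toy example with 4 "velocities"
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 4 * np.eye(4)  # some invertible transform
f = rng.random(4)
m_eq = M @ f * 0.9                           # arbitrary equilibrium moments
S = np.diag([0.0, 0.5, 1.0, 1.2])            # per-moment relaxation rates

f_post = mrt_collision(f, M, S, m_eq)
# a moment with relaxation rate 0 is exactly conserved by the collision:
assert np.isclose((M @ f_post)[0], (M @ f)[0])
```

    Giving each moment its own relaxation rate (the diagonal of S) is what distinguishes MRT from the single-relaxation-time version, where S is a multiple of the identity.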

    EMO: Episodic Memory Optimization for Few-Shot Meta-Learning

    Few-shot meta-learning presents a challenge for gradient descent optimization due to the limited number of training samples per task. To address this issue, we propose an episodic memory optimization for meta-learning, which we call EMO, inspired by the human ability to recall past learning experiences from the brain's memory. EMO retains the gradient history of past experienced tasks in external memory, enabling few-shot learning in a memory-augmented way. By learning to retain and recall the learning process of past training tasks, EMO nudges parameter updates in the right direction, even when the gradients provided by a limited number of examples are uninformative. We prove theoretically that our algorithm converges for smooth, strongly convex objectives. EMO is generic, flexible, and model-agnostic, making it a simple plug-and-play optimizer that can be seamlessly embedded into existing optimization-based few-shot meta-learning approaches. Empirical results show that EMO scales well on most few-shot classification benchmarks and improves the performance of optimization-based meta-learning methods, resulting in accelerated convergence.
    Comment: Accepted by CoLLAs 202

    Multi-Label Meta Weighting for Long-Tailed Dynamic Scene Graph Generation

    This paper investigates the problem of scene graph generation in videos, with the aim of capturing semantic relations between subjects and objects in the form of ⟨subject, predicate, object⟩ triplets. Recognizing the predicate between subject and object pairs is imbalanced and multi-label in nature, ranging from ubiquitous interactions such as spatial relationships (e.g. in front of) to rare interactions such as twisting. In widely used benchmarks such as Action Genome and VidOR, the imbalance ratio between the most and least frequent predicates reaches 3,218 and 3,408, respectively, surpassing even benchmarks specifically designed for long-tailed recognition. Due to the long-tailed distributions and label co-occurrences, recent state-of-the-art methods predominantly focus on the most frequently occurring predicate classes, ignoring those in the long tail. In this paper, we analyze the limitations of current approaches to scene graph generation in videos and identify a one-to-one correspondence between predicate frequency and recall performance. To take a step towards unbiased scene graph generation in videos, we introduce a multi-label meta-learning framework to deal with the biased predicate distribution. Our framework learns a meta-weight network for each training sample over all possible label losses. We evaluate our approach on the Action Genome and VidOR benchmarks by building upon two current state-of-the-art methods for each benchmark. The experiments demonstrate that the multi-label meta-weight network improves performance for predicates in the long tail without compromising performance for head classes, resulting in better overall performance and favorable generalizability. Code: https://github.com/shanshuo/ML-MWN.
    Comment: ICMR 202
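    The weighting idea above (a small network maps each sample's per-label losses to weights, so rare, high-loss predicates are not drowned out by head classes) can be illustrated with a hand-written stand-in. The sigmoid "network" below is an assumption for illustration only; in the paper the meta-weight network is learned by meta-learning, not fixed.

```python
import numpy as np

def meta_weights(losses, w=1.0, b=0.0):
    """Map per-label losses to weights in (0, 1) with a tiny stand-in 'network'."""
    return 1.0 / (1.0 + np.exp(-(w * losses + b)))  # sigmoid of the loss

def weighted_multilabel_loss(losses):
    """Weighted average of per-sample, per-label losses."""
    weights = meta_weights(losses)
    return np.sum(weights * losses) / np.sum(weights)

# two samples, three predicate labels; the third label is a rare predicate
# with a large loss, which is up-weighted relative to uniform averaging
losses = np.array([[0.10, 0.20, 3.0],
                   [0.05, 0.10, 2.5]])
assert weighted_multilabel_loss(losses) > losses.mean()
```

    Because the weights increase with the loss, tail predicates contribute more to the total objective than under a plain mean, which is the effect the meta-weight network is trained to achieve adaptively.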

    Training-Free Semantic Segmentation via LLM-Supervision

    Recent advances in open-vocabulary models, like CLIP, have notably advanced zero-shot classification and segmentation by utilizing natural language for class-specific embeddings. However, most research has focused on improving model accuracy through prompt engineering, prompt learning, or fine-tuning with limited labeled data, thereby overlooking the importance of refining the class descriptors. This paper introduces a new approach to text-supervised semantic segmentation that uses supervision from a large language model (LLM) and requires no extra training. Our method starts from an LLM, like GPT-3, to generate a detailed set of subclasses for more accurate class representation. We then employ an advanced text-supervised semantic segmentation model to apply the generated subclasses as target labels, resulting in diverse segmentation results tailored to each subclass's unique characteristics. Additionally, we propose an assembly that merges the segmentation maps from the various subclass descriptors to ensure a more comprehensive representation of the different aspects of the test images. Through comprehensive experiments on three standard benchmarks, our method outperforms traditional text-supervised semantic segmentation methods by a marked margin.
    Comment: 22 pages, 10 figures, conferenc
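    The assembly step described above (merging per-subclass segmentation maps into one per-class prediction) can be sketched as follows. The max-over-subclasses merge rule and the shapes are assumptions for illustration; the paper's assembly may differ.

```python
import numpy as np

def assemble(subclass_scores, subclass_to_class, n_classes):
    """Merge per-subclass score maps (S, H, W) into a per-pixel class map.

    For each class, take the per-pixel maximum over its subclasses' scores,
    then argmax across classes.
    """
    S, H, W = subclass_scores.shape
    class_scores = np.full((n_classes, H, W), -np.inf)
    for s in range(S):
        c = subclass_to_class[s]
        class_scores[c] = np.maximum(class_scores[c], subclass_scores[s])
    return class_scores.argmax(axis=0)  # per-pixel class label

# three hypothetical subclasses: "tabby", "siamese" -> cat (0); "beagle" -> dog (1)
scores = np.stack([np.full((2, 2), 0.2),
                   np.array([[0.9, 0.1], [0.1, 0.1]]),
                   np.full((2, 2), 0.5)])
seg = assemble(scores, subclass_to_class=[0, 0, 1], n_classes=2)
assert seg[0, 0] == 0 and seg[1, 1] == 1
```

    A pixel is assigned to "cat" whenever any of the cat subclass descriptors scores it highly, which is how the merged map covers the different aspects each subclass captures.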

    Learning to Learn Variational Semantic Memory

    In this paper, we introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning. The variational semantic memory accrues and stores semantic information for the probabilistic inference of class prototypes in a hierarchical Bayesian framework. The semantic memory is grown from scratch and gradually consolidated by absorbing information from the tasks it experiences. By doing so, it accumulates long-term, general knowledge that enables it to learn new concepts of objects. We formulate memory recall as the variational inference of a latent memory variable from addressed contents, which offers a principled way to adapt the knowledge to individual tasks. Our variational semantic memory, as a new long-term memory module, confers principled recall and update mechanisms that enable semantic information to be efficiently accrued and adapted for few-shot learning. Experiments demonstrate that the probabilistic modelling of prototypes achieves a more informative representation of object classes than deterministic vectors. Consistent new state-of-the-art performance on four benchmarks shows the benefit of variational semantic memory in boosting few-shot recognition.
    Comment: accepted to NeurIPS 2020; code is available at https://github.com/YDU-uva/VS
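    One ingredient above, modelling a class prototype as a distribution rather than a deterministic vector, can be sketched minimally: fit a Gaussian to each class's support features and score queries by log-density. This omits the memory-based hierarchical inference that is the paper's actual contribution; the diagonal-Gaussian prototype below is only a simplified illustration of why a distributional prototype is more informative than a point estimate.

```python
import numpy as np

def gaussian_prototype(support):
    """Diagonal-Gaussian prototype from a class's support features (n, d)."""
    mu = support.mean(axis=0)
    var = support.var(axis=0) + 1e-3  # small floor keeps the density proper
    return mu, var

def log_prob(x, mu, var):
    """Log-density of a query under a diagonal Gaussian prototype."""
    return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))

support0 = np.array([[0.0, 0.1], [0.2, -0.1], [-0.1, 0.0]])   # class 0
support1 = np.array([[3.0, 3.1], [2.8, 3.2], [3.1, 2.9]])     # class 1
protos = [gaussian_prototype(support0), gaussian_prototype(support1)]

query = np.array([2.9, 3.0])  # lies near class 1
pred = int(np.argmax([log_prob(query, mu, var) for mu, var in protos]))
assert pred == 1
```

    Unlike a nearest-centroid rule, the variance term lets a prototype express per-dimension uncertainty, so dimensions the class varies widely in penalize a query less than tightly clustered ones.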

    Learning to Learn Kernels with Variational Random Features

    In this work, we introduce kernels with random Fourier features into the meta-learning framework to leverage their strong few-shot learning ability. We propose meta variational random features (MetaVRF) to learn adaptive kernels for the base-learner, developed in a latent variable model by treating the random feature basis as the latent variable. We formulate the optimization of MetaVRF as a variational inference problem by deriving an evidence lower bound under the meta-learning framework. To incorporate shared knowledge from related tasks, we propose a context inference of the posterior, established with an LSTM architecture. The LSTM-based inference network can effectively integrate the context information of previous tasks with task-specific information, generating informative and adaptive features. The learned MetaVRF can produce kernels of high representational power with a relatively low spectral sampling rate and also enables fast adaptation to new tasks. Experimental results on a variety of few-shot regression and classification tasks demonstrate that MetaVRF delivers much better, or at least competitive, performance compared to existing meta-learning alternatives.
    Comment: ICML'2020; code is available at: https://github.com/Yingjun-Du/MetaVR
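    As background for the abstract above, the classical random Fourier feature construction that MetaVRF builds on is z(x) = sqrt(2/D) cos(Wx + b), with rows of W drawn from N(0, 1/σ²) and b uniform on [0, 2π], so that z(x)·z(y) ≈ exp(−‖x−y‖²/(2σ²)). MetaVRF instead infers the basis as a task-conditioned latent variable via an LSTM; the plain Monte Carlo sampling below is only the standard task-agnostic version.

```python
import numpy as np

def random_fourier_features(X, W, b):
    """Map inputs X (n, d) to random Fourier features (n, D)."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
d, D, sigma = 3, 5000, 1.0
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))   # spectral samples of the RBF kernel
b = rng.uniform(0.0, 2 * np.pi, size=D)         # random phases

x, y = rng.normal(size=d), rng.normal(size=d)
zx = random_fourier_features(x[None], W, b)
zy = random_fourier_features(y[None], W, b)

approx = (zx @ zy.T)[0, 0]                                 # feature inner product
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))   # RBF kernel value
assert abs(approx - exact) < 0.1
```

    The approximation error shrinks as O(1/sqrt(D)); the abstract's point is that a learned, task-adaptive basis reaches a good kernel with far fewer spectral samples than this uniform sampling needs.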

    Wildfires enhance phytoplankton production in tropical oceans

    Wildfire magnitude and frequency have greatly escalated on a global scale. Wildfire products rich in biogenic elements can enter the ocean through atmospheric and river inputs, but their contribution to marine phytoplankton production is poorly understood. Here, using geochemical paleo-reconstructions, a century-long relationship between wildfire magnitude and marine phytoplankton production is established in a fire-prone region of the Kimberley coast, Australia. A positive correlation is identified between wildfire and phytoplankton production on a decadal scale. The importance of wildfire for marine phytoplankton production is statistically higher than that of tropical cyclones and rainfall when a strong El Niño Southern Oscillation coincides with the positive phase of the Indian Ocean Dipole. Interdecadal chlorophyll-a variation along the Kimberley coast validates the spatial connection of this phenomenon. Findings from this study suggest that the role of additional nutrients from wildfires has to be considered when projecting the impacts of global warming on marine phytoplankton production.
