
    Bosonic construction of CKP tau function

    The CKP tau function is an important object in mathematical physics. In this paper, the inverse of the vacuum expectation value of the exponential of certain bosonic fields is shown to be the CKP tau function given by Chang and Wu, in the language of the CKP Darboux transformation. Computing this vacuum expectation value directly is usually quite difficult, since the square of a bosonic field is in general nonzero. Here the vacuum expectation value is understood as the successive application of CKP Darboux transformations, so that it can be computed with the methods of integrable systems, and a useful formula for it is given. As an application, we construct solutions of the KdV hierarchy from vacuum expectation values of bosonic fields, using the fact that the KdV hierarchy is the 2-reduction of the CKP hierarchy. Comment: 31 pages
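
    As background for the last sentence (a standard fact about 2-reduction, quoted in one common normalization; it is not the paper's specific construction): the 2-reduction forces the square of the Lax operator to be a purely differential operator, and its remaining flows form the KdV hierarchy,

    $L^2=(L^2)_{\geq 0}=\partial^2+u$, $\qquad \partial_{t_3}u=\tfrac{1}{4}(u_{xxx}+6uu_x)$, $\qquad u=2\,\partial_x^2\log\tau_{\mathrm{KP}}$,

    so tau functions of the reduced hierarchy, here built from vacuum expectation values, yield KdV solutions.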

    Generalized Bigraded Toda Hierarchy

    The bigraded Toda hierarchy $L_1^M(n)=L_2^N(n)$ is generalized to $L_1^M(n)=L_2^{N}(n)+\sum_{j\in\mathbb{Z}}\sum_{i=1}^{m}q^{(i)}_n\Lambda^j r^{(i)}_{n+1}$, which is the analogue of the famous constrained KP hierarchy $L^{k}=(L^{k})_{\geq 0}+\sum_{i=1}^{m}q_{i}\partial^{-1}r_i$. It is known that different bosonizations of the fermionic KP hierarchy give rise to different kinds of integrable hierarchies. Starting from the fermionic form of the constrained KP hierarchy, the bilinear equation of this generalized bigraded Toda hierarchy (GBTH) is derived by using the 2-component boson-fermion correspondence. Based upon this, the Lax structure of the GBTH is obtained. Conversely, we also derive the bilinear equation of the GBTH from the corresponding Lax structure. Comment: 16 pages
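
    For orientation, the constrained KP hierarchy quoted above has a standard Lax formulation (given here in one common convention as background; it is not the GBTH Lax structure derived in the paper):

    $\partial_{t_l}L=[(L^{l})_{\geq 0},L]$ with $L^{k}=(L^{k})_{\geq 0}+\sum_{i=1}^{m}q_{i}\partial^{-1}r_{i}$, $\qquad \partial_{t_l}q_{i}=(L^{l})_{\geq 0}(q_{i})$, $\quad \partial_{t_l}r_{i}=-\big((L^{l})_{\geq 0}\big)^{*}(r_{i})$,

    so the constraint is preserved by the flows; roughly speaking, the GBTH is the difference analogue in which $\partial^{-1}$ is replaced by the sum over powers of the shift operator $\Lambda$ appearing above.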

    Text Simplification Using Neural Machine Translation

    Text simplification (TS) is the technique of reducing the lexical and syntactic complexity of text. Existing automatic TS systems can simplify text only by lexical simplification or by manually defined rules. Neural Machine Translation (NMT) is a recently proposed approach to Machine Translation (MT) that is receiving a lot of research interest. In this paper, we regard original English and simplified English as two languages and apply an NMT model, a Recurrent Neural Network (RNN) encoder-decoder, to TS so that the neural network learns text simplification rules by itself. We then discuss challenges and strategies for applying an NMT model to the task of text simplification.
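
    A minimal sketch (PyTorch) of the general RNN encoder-decoder idea described above, treating complex English as the source "language" and simplified English as the target. Class names, sizes, and the training step are illustrative assumptions, not the paper's system.

    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

        def forward(self, src):                   # src: (batch, src_len) token ids
            _, hidden = self.rnn(self.embed(src))
            return hidden                         # fixed-size summary of the complex sentence

    class Decoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, tgt, hidden):           # tgt: (batch, tgt_len), teacher forcing
            output, hidden = self.rnn(self.embed(tgt), hidden)
            return self.out(output), hidden       # logits over the simplified vocabulary

    def train_step(encoder, decoder, optimizer, src, tgt, pad_idx=0):
        # Encode the complex sentence, decode the simplification, and minimise
        # cross-entropy against the reference; standard seq2seq training.
        optimizer.zero_grad()
        hidden = encoder(src)
        logits, _ = decoder(tgt[:, :-1], hidden)  # predict each next target token
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               tgt[:, 1:].reshape(-1), ignore_index=pad_idx)
        loss.backward()
        optimizer.step()
        return loss.item()

    At inference time the decoder is run autoregressively from a start token instead of with teacher forcing.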

    The Effect on Long-Chain Fatty Acids in Lucerne Silage with Jujube Powder and Lactobacillus plantarum

    The major nutrients of lucerne silage are well documented. However, forages are also an important dietary source of α-linolenic acid (C18:3n-3) and linoleic acid (C18:2n-6), which are biohydrogenated in the rumen, giving rise to a complex pattern of C18 fatty acids (Jenkins et al. 2008). Studies have reported slight effects of additives such as formalin, formic acid, or enzymes on the fatty acid (FA) composition of grass silages (Alves et al. 2011). However, there are no studies on the addition of jujube powder, which has a high sugar content, to lucerne silage. The effect of Lactobacillus plantarum (LA) on silage fermentation quality has been frequently observed, but few studies have focussed on long-chain fatty acids in lucerne silage made with jujube powder and Lactobacillus plantarum. The objective of this study was to evaluate the effect of the addition of jujube powder and Lactobacillus plantarum on the long-chain fatty acids (mainly C16-C18) in lucerne silage.

    A Lactic Acid Bacterium Isolated from Grass in Native Grassland in Northern China

    Epiphytic LAB convert sugar into lactic acid during the ensiling process; as a result, the pH is reduced and the forage is preserved. Further study of epiphytic LAB species is therefore required, especially the screening of superior strains. However, to our knowledge, limited information is available on the epiphytic microflora of grass in native grassland. The present study set out to screen, isolate and identify LAB from grass silages made in native grassland in northern China.

    Toda Darboux transformations and vacuum expectation values

    Determinant formulas for the vacuum expectation values $\langle s+k+n-m,-s|e^{H(\mathbf{t})}\beta_m^{*}\cdots\beta_1^{*}\beta_n\cdots\beta_1 g|k\rangle$ are given by using Toda Darboux transformations. First, note that the 2-Toda hierarchy can be viewed as the 2-component bosonization of the fermionic KP hierarchy; then two elementary Toda Darboux transformation operators $T_{+}(q)=\Lambda(q)\cdot\Delta\cdot q^{-1}$ and $T_{-}(r)=\Lambda^{-1}(r)^{-1}\cdot\Delta^{-1}\cdot r$ are constructed from the changes of the Toda (adjoint) wave functions by using the 2-component boson-fermion correspondence. Based on this, the above vacuum expectation values can be realized as successive applications of Toda Darboux transformations, so the corresponding determinant formulas can be derived from the determinant representations of the Toda Darboux transformations. Finally, by similar methods, we also give the determinant formulas for $\langle n-m|e^{\mathcal{H}(\mathbf{x})}\beta_m^{*}\cdots\beta_1^{*}\beta_n\cdots\beta_1 g|k\rangle$, related to KP tau functions. Comment: 18 pages
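
    As schematic background for the determinant mechanism (the standard Crum-type composition for difference Darboux transformations, written assuming $\Delta=\Lambda-1$ and the conventions above; the paper's exact normalizations may differ): iterating the elementary transformation on eigenfunctions $q_1,\dots,q_p$ acts by a ratio of Casorati determinants,

    $\big(T_{+}(q_p^{[p-1]})\cdots T_{+}(q_2^{[1]})T_{+}(q_1)\big)(f)(n)=\dfrac{C(q_1,\dots,q_p,f)(n)}{C(q_1,\dots,q_p)(n)}$, $\qquad C(h_1,\dots,h_p)(n)=\det\big(h_i(n+j-1)\big)_{1\le i,j\le p}$,

    where $q_i^{[i-1]}$ denotes $q_i$ after the first $i-1$ transformations; it is this determinant representation that converts successive Darboux transformations, and hence the vacuum expectation values, into determinant formulas.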

    X^2-VLM: All-In-One Pre-trained Model For Vision-Language Tasks

    Vision-language pre-training aims to learn alignments between vision and language from a large amount of data. We proposed multi-grained vision-language pre-training, a unified approach that learns vision-language alignments at multiple levels of granularity. This paper advances the proposed method by unifying image and video encoding in one model and scaling the model up with large-scale data. We present X^2-VLM, a pre-trained VLM with a modular architecture for both image-text and video-text tasks. Experimental results show that X^2-VLM performs best at both base and large scale on image-text and video-text tasks, offering a good trade-off between performance and model scale. Moreover, we show that the modular design of X^2-VLM gives it high transferability, allowing it to be used in any language or domain. For example, by simply replacing the text encoder with XLM-R, X^2-VLM outperforms state-of-the-art multilingual multi-modal pre-trained models without any multilingual pre-training. The code and pre-trained models will be available at github.com/zengyan-97/X2-VLM. Comment: 21 pages, 8 figures
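
    A sketch of the modular design described above: separate vision and text encoders feed a cross-modal fusion module, so the text encoder can be swapped (e.g. for a multilingual encoder such as XLM-R) without touching the rest. Class and attribute names, shapes, and the matching head are assumptions for illustration; the released code is at github.com/zengyan-97/X2-VLM.

    import torch.nn as nn

    class ModularVLM(nn.Module):
        def __init__(self, vision_encoder, text_encoder, fusion, hidden_dim=768):
            super().__init__()
            self.vision_encoder = vision_encoder      # images / video frames -> patch features
            self.text_encoder = text_encoder          # tokens -> text features (swappable module)
            self.fusion = fusion                      # cross-modal encoder over both streams
            self.itm_head = nn.Linear(hidden_dim, 2)  # image-text matching classifier

        def forward(self, pixels, tokens):
            v = self.vision_encoder(pixels)           # (batch, num_patches, hidden_dim)
            t = self.text_encoder(tokens)             # (batch, num_tokens, hidden_dim)
            joint = self.fusion(v, t)                 # (batch, num_tokens, hidden_dim)
            return self.itm_head(joint[:, 0])         # score from the first (CLS-like) position

    Because the components only meet through tensor interfaces, swapping text_encoder for a multilingual encoder is a constructor-argument change rather than an architectural one, which is the property the abstract exploits with XLM-R.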