    Follicular regulatory T cells repress cytokine production by follicular helper T cells and optimize IgG responses in mice

    Follicular helper T (Tfh) cells provide crucial help to germinal center B (GCB) cells for proper antibody production, and a specialized subset of regulatory T cells, follicular regulatory T (Tfr) cells, modulates this process. However, Tfr-cell function in the GC is not well understood. Here, we define Tfr cells as a CD4(+) Foxp3(+) CXCR5(hi) PD-1(hi) CD25(lo) TIGIT(hi) T-cell population. Furthermore, we have used a novel mouse model ("Bcl6FC") to delete the Bcl6 gene in Foxp3(+) T cells and thus specifically deplete Tfr cells. Following immunization, Bcl6FC mice develop normal Tfh- and GCB-cell populations. However, Bcl6FC mice produce altered antigen-specific antibody responses, with reduced titers of IgG and significantly increased IgA. Bcl6FC mice also develop IgG antibodies with significantly decreased avidity to antigen in an HIV-1 gp120 "prime-boost" vaccine model. In an autoimmune lupus model, we observed strongly elevated anti-DNA IgA titers in Bcl6FC mice. Additionally, Tfh cells from Bcl6FC mice consistently produce higher levels of interferon-γ, IL-10, and IL-21. Loss of Tfr cells therefore leads to highly abnormal Tfh-cell and GCB-cell responses. Overall, our study has uncovered unique regulatory roles for Tfr cells in the GC response.

    On a conjecture of Ghorpade, Datta and Beelen for the number of points of varieties over finite fields

    Consider a finite field $\mathbb{F}_q$ and positive integers $d, m, r$ with $1 \leq r \leq \binom{m+d}{d}$. Let $S_d(m)$ be the $\mathbb{F}_q$-vector space of all homogeneous polynomials of degree $d$ in $X_0, \dots, X_m$. Let $e_r(d,m)$ be the maximum number of $\mathbb{F}_q$-rational points in the vanishing set of $W$ as $W$ varies over all subspaces of $S_d(m)$ of dimension $r$. Ghorpade, Datta and Beelen conjectured an exact formula for $e_r(d,m)$ when $q \geq d+1$. We prove that their conjectured formula is true when $q$ is sufficiently large in terms of $m, d, r$. The problem of determining $e_r(d,m)$ is equivalent to that of computing the $r^{\mathrm{th}}$ generalized Hamming weights of the projective Reed-Muller code $PRM_q(d,m)$. It is also equivalent to determining the maximum number of points on sections of Veronese varieties by linear subvarieties of codimension $r$.
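    As a concrete illustration of the quantity defined above, the following brute-force sketch computes $e_1(d,m)$ for a tiny case directly from the definition. This is an illustrative Python sketch, not the authors' method; the choice of $q=2$, $d=1$, $m=2$ and the restriction to $r=1$ (where a one-dimensional subspace is spanned by a single nonzero form) are assumptions made purely for the example.

```python
# Brute-force e_1(d, m): maximum number of F_q-rational projective zeros of a
# single nonzero homogeneous form of degree d (the r = 1 case of the definition).
from itertools import product

q, d, m = 2, 1, 2  # F_2, degree-1 forms, projective plane P^2 (illustrative choice)

# Points of P^m(F_q), normalized so the first nonzero coordinate is 1.
points = []
for v in product(range(q), repeat=m + 1):
    if any(v):
        first = next(i for i, x in enumerate(v) if x)
        if v[first] == 1:
            points.append(v)

# Monomials of degree d in X_0, ..., X_m, stored as exponent tuples.
monomials = [e for e in product(range(d + 1), repeat=m + 1) if sum(e) == d]

def evaluate(coeffs, point):
    """Evaluate the form with the given coefficients at a point, over F_q."""
    total = 0
    for c, exps in zip(coeffs, monomials):
        term = c
        for x, k in zip(point, exps):
            term = term * pow(x, k, q) % q
        total = (total + term) % q
    return total

# For r = 1, the vanishing set of W = span(f) is just the zero set of f.
best = 0
for coeffs in product(range(q), repeat=len(monomials)):
    if any(coeffs):
        zeros = sum(1 for p in points if evaluate(coeffs, p) == 0)
        best = max(best, zeros)

print(best)  # 3 = q + 1: the points of a line in P^2(F_2)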

    Backward Imitation and Forward Reinforcement Learning via Bi-directional Model Rollouts

    Traditional model-based reinforcement learning (RL) methods generate forward rollout traces with a learnt dynamics model to reduce interactions with the real environment. Recent model-based RL work additionally learns a backward model, which specifies the conditional probability of the previous state given the previous action and the current state, in order to generate backward rollout trajectories as well. However, in this type of method, the samples derived from backward rollouts and those from forward rollouts are simply aggregated together to optimize the policy via a model-free RL algorithm, which may reduce both the sample efficiency and the convergence rate. Such an approach ignores the fact that backward rollout traces are typically generated starting from high-value states and are therefore more instructive for improving the agent's behavior. In this paper, we propose the backward imitation and forward reinforcement learning (BIFRL) framework, in which the agent treats backward rollout traces as expert demonstrations for imitating strong behaviors and then collects forward rollout transitions for policy reinforcement. Consequently, BIFRL empowers the agent to both reach and explore from high-value states more efficiently, further reducing real interactions and making it potentially more suitable for real-robot learning. Moreover, a value-regularized generative adversarial network is introduced to augment the valuable states that the agent rarely encounters. Theoretically, we provide the conditions under which BIFRL is superior to the baseline methods. Experimentally, we demonstrate that BIFRL achieves better sample efficiency and competitive asymptotic performance on various MuJoCo locomotion tasks compared against state-of-the-art model-based methods. Comment: Accepted by IROS202
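    To make the division of labor concrete, here is a minimal, hypothetical sketch of the BIFRL idea as described in the abstract: backward rollouts from a high-value state serve as demonstrations for a behavioral-cloning update, while forward rollouts with the current policy are collected for a standard model-free update. All names, the linear placeholder models, and the synthetic data are assumptions for illustration, not the authors' implementation.

```python
# Sketch (assumed structure, not the BIFRL paper's code): backward rollouts are
# imitated, forward rollouts are gathered for the model-free RL update.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2

# Placeholder "learned" models: linear forward/backward dynamics and a linear policy.
W_fwd = rng.normal(size=(STATE_DIM, STATE_DIM + ACTION_DIM)) * 0.1
W_bwd = rng.normal(size=(STATE_DIM, STATE_DIM + ACTION_DIM)) * 0.1
policy_W = np.zeros((ACTION_DIM, STATE_DIM))

def backward_rollout(high_value_state, horizon=5):
    """Roll the backward model from a high-value state to synthesize (state, action) pairs."""
    demos, s = [], high_value_state
    for _ in range(horizon):
        a = rng.normal(size=ACTION_DIM)          # sampled previous action
        prev_s = W_bwd @ np.concatenate([s, a])  # predicted previous state
        demos.append((prev_s, a))
        s = prev_s
    return demos

def imitation_update(demos, lr=1e-2):
    """Behavioral cloning on backward-rollout pairs treated as expert demonstrations."""
    global policy_W
    for s, a in demos:
        grad = np.outer(policy_W @ s - a, s)     # gradient of squared action error
        policy_W -= lr * grad

def forward_rollout(start_state, horizon=5):
    """Forward rollouts with the current policy, collected for the model-free update."""
    transitions, s = [], start_state
    for _ in range(horizon):
        a = policy_W @ s
        s_next = W_fwd @ np.concatenate([s, a])
        transitions.append((s, a, s_next))
        s = s_next
    return transitions

high_value_state = rng.normal(size=STATE_DIM)
imitation_update(backward_rollout(high_value_state))
forward_data = forward_rollout(rng.normal(size=STATE_DIM))
print(f"collected {len(forward_data)} forward transitions for the model-free update")
```

    The key design choice the abstract argues for is visible here: the two sample streams are not merged into one replay buffer; backward samples drive an imitation loss, forward samples would drive the usual policy-reinforcement loss.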