Signal-induced Brd4 release from chromatin is essential for its role transition from chromatin targeting to transcriptional regulation
The bromodomain-containing protein Brd4 persistently associates with chromosomes during mitosis, transmitting epigenetic memory across cell divisions. During interphase, Brd4 also plays a key role in regulating the transcription of signal-inducible genes by recruiting positive transcription elongation factor b (P-TEFb) to promoters. How chromatin-bound Brd4 transitions into a transcriptional-regulation mode in response to stimulation, however, is largely unknown. Here, by analyzing the dynamics of Brd4 during ultraviolet or hexamethylene bisacetamide treatment, we show that the signal-induced release of chromatin-bound Brd4 is essential for this functional transition. In untreated cells, almost all Brd4 is observed in association with interphase chromatin. Upon treatment, Brd4 is released from chromatin, mostly owing to signal-triggered deacetylation of nucleosomal histone H4 at acetylated lysines 5 and 8 (H4K5ac/K8ac). Through selective association with the transcriptionally active form of P-TEFb, which is liberated from the inactive multi-subunit complex in response to treatment, the released Brd4 mediates the recruitment of active P-TEFb to promoters, enhancing transcription at the elongation stage. Thus, through signal-induced release from chromatin and selective association with the active form of P-TEFb, chromatin-bound Brd4 switches its role to mediate the recruitment of P-TEFb for regulating the transcriptional elongation of signal-inducible genes.
Funding: National Natural Science Foundation of China [30930046, 30670408, 81070307]; Natural Science Foundation of Fujian [C0210005, 2010J01231]; Science Planning Program of Fujian Province [2009J1010, 2010J1008]; National Foundation for Fostering Talents of Basic Science [J1030626
Research of piezoelectric acoustic liner
The piezoelectric acoustic liner is a new type of acoustic liner that uses piezoelectric patches in place of the traditional mechanical structure. Its working principle is to change the resonator volume of the acoustic liner through the inverse piezoelectric effect. In this paper, the finite element method is used to analyze the deformation of the piezoelectric patches and the acoustic performance of the piezoelectric acoustic liner. When the piezoelectric patch deformation is 0.1 mm, the noise-elimination frequency band of the acoustic liner shifts by about 30 Hz. Related experiments were designed, and the results confirm that the noise-elimination frequency range of the piezoelectric acoustic liner is 1100 Hz to 1300 Hz within the voltage range of 0 V to 200 V
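The working principle described above (shifting a liner's resonance by changing resonator volume) can be illustrated with the standard Helmholtz-resonator formula; a smaller cavity volume raises the resonant frequency. The dimensions below are illustrative assumptions, not taken from the paper:

```python
import math

# Minimal sketch of the liner's working principle, assuming each liner
# cell behaves as a Helmholtz resonator: f = (c / 2*pi) * sqrt(S / (V * L)).
# All dimensions are illustrative assumptions, not the paper's geometry.

C = 343.0  # speed of sound in air, m/s

def helmholtz_freq(neck_area, cavity_volume, neck_length):
    """Resonant frequency of a Helmholtz resonator, in Hz."""
    return C / (2 * math.pi) * math.sqrt(neck_area / (cavity_volume * neck_length))

neck_area = 2.0e-5    # m^2, perforation area of the facesheet (assumed)
neck_length = 2.0e-3  # m, facesheet thickness (assumed)
volume = 2.0e-5       # m^3, resonator cavity volume (assumed)

f0 = helmholtz_freq(neck_area, volume, neck_length)
# A small patch deflection shrinks the cavity volume slightly (here by 5%),
# which shifts the resonance upward by a few tens of Hz.
f1 = helmholtz_freq(neck_area, volume * 0.95, neck_length)
print(round(f0), round(f1), round(f1 - f0))
```

With these assumed dimensions the baseline resonance lands near the paper's 1100-1300 Hz band, and a few-percent volume change produces a shift on the order of the reported ~30 Hz offset.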
Sigma-1 Receptor Antagonist BD1047 Reduces Mechanical Allodynia in a Rat Model of Bone Cancer Pain through the Inhibition of Spinal NR1 Phosphorylation and Microglia Activation
Previous studies have demonstrated that the sigma-1 receptor plays important roles in the induction phase of rodent neuropathic pain; however, whether it is involved in bone cancer pain (BCP), and the underlying mechanisms, remain elusive. The aim of this study was to examine the potential role of the spinal sigma-1 receptor in the development of BCP. Walker 256 mammary gland carcinoma cells were implanted into the intramedullary space of the right tibia of Sprague-Dawley rats to induce ongoing bone cancer-related pain behaviors. Our findings indicated that, on days 7, 10, 14, and 21 after the operation, the expression of the sigma-1 receptor in the spinal cord was higher in BCP rats than in sham rats. Furthermore, intrathecal injection of 120 nmol of the sigma-1 receptor antagonist BD1047 on days 5, 6, and 7 after the operation attenuated mechanical allodynia as well as the associated induction of c-Fos, activation of microglial cells, phosphorylation of NR1, and the subsequent Ca2+-dependent signaling in BCP rats. These results suggest that the sigma-1 receptor is involved in the development of bone cancer pain and that targeting the sigma-1 receptor may be a new strategy for the treatment of bone cancer pain
Tensor-to-Vector Regression for Multi-channel Speech Enhancement based on Tensor-Train Network
We propose a tensor-to-vector regression approach to multi-channel speech
enhancement in order to address the issue of input size explosion and
hidden-layer size expansion. The key idea is to cast the conventional deep
neural network (DNN) based vector-to-vector regression formulation under a
tensor-train network (TTN) framework. TTN is a recently emerged approach for
compactly representing deep models with fully connected hidden layers. Thus,
TTN maintains DNN's expressive power yet involves far fewer trainable
parameters. Furthermore, TTN can handle a multi-dimensional tensor
input by design, which exactly matches the desired setting in multi-channel
speech enhancement. We first provide a theoretical extension from DNN to TTN
based regression. Next, we show that TTN can attain speech enhancement quality
comparable to that of DNN but with far fewer parameters, e.g., a reduction
from 27 million to only 5 million parameters is observed in a single-channel
scenario. TTN also improves PESQ over DNN from 2.86 to 2.96 by slightly
increasing the number of trainable parameters. Finally, in 8-channel
conditions, a PESQ of 3.12 is achieved using 20 million parameters for TTN,
whereas a DNN with 68 million parameters can only attain a PESQ of 3.06. Our
implementation is available online at
https://github.com/uwjunqi/Tensor-Train-Neural-Network.
Comment: Accepted to ICASSP 2020; updated with reproducible code
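The parameter reduction the abstract reports comes from factorizing a fully connected layer's weight matrix into a chain of small tensor-train (TT) cores. A minimal NumPy sketch, with illustrative mode sizes and TT-ranks (not the paper's configuration):

```python
import numpy as np

# Sketch of a tensor-train (TT) factorization of a dense layer's weights.
# A 512 x 64 weight matrix is represented by three small 4-way cores;
# mode sizes and TT-ranks below are illustrative assumptions.

rng = np.random.default_rng(0)

in_modes, out_modes = (8, 8, 8), (4, 4, 4)  # 8*8*8 = 512 inputs, 4*4*4 = 64 outputs
ranks = (1, 4, 4, 1)                        # TT-ranks; boundary ranks are 1

# One core per mode pair, shaped (r_{k-1}, m_k, n_k, r_k)
cores = [rng.standard_normal((ranks[k], in_modes[k], out_modes[k], ranks[k + 1]))
         for k in range(3)]

# Contracting the cores recovers the full 512 x 64 weight matrix.
# (In practice the cores are contracted with the input directly and W is
# never formed; we materialize it here only to verify the factorization.)
W = np.einsum('aijb,bklc,cmnd->ikmjln', *cores).reshape(512, 64)

dense_params = 512 * 64                       # parameters in the dense layer
tt_params = sum(core.size for core in cores)  # parameters in the TT cores

x = rng.standard_normal((16, 512))  # a batch of 16 input vectors
y = x @ W                           # forward pass -> (16, 64)
print(dense_params, tt_params, y.shape)
```

Here the TT cores hold 768 parameters versus 32,768 for the dense layer, a roughly 40x compression, mirroring in miniature the 27M-to-5M reduction reported for the single-channel model.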
Exploring Deep Hybrid Tensor-to-Vector Network Architectures for Regression Based Speech Enhancement
This paper investigates different trade-offs between the number of model
parameters and enhanced speech qualities by employing several deep
tensor-to-vector regression models for speech enhancement. We find that a
hybrid architecture, namely CNN-TT, is capable of maintaining a good quality
performance with a reduced model parameter size. CNN-TT is composed of several
convolutional layers at the bottom for feature extraction to improve speech
quality and a tensor-train (TT) output layer on the top to reduce model
parameters. We first derive a new upper bound on the generalization power of
the convolutional neural network (CNN) based vector-to-vector regression
models. Then, we provide experimental evidence on the Edinburgh noisy speech
corpus to demonstrate that, in single-channel speech enhancement, CNN
outperforms DNN at the expense of a small increase in model size. Moreover,
CNN-TT slightly outperforms its CNN counterpart while using only 32% of the
CNN model parameters, and further performance improvement can be attained
when the number of CNN-TT parameters is increased to 44% of the CNN model size.
Finally, our experiments of multi-channel speech enhancement on a simulated
noisy WSJ0 corpus demonstrate that our proposed hybrid CNN-TT architecture
achieves better results than both DNN and CNN models, delivering
better-enhanced speech quality with smaller parameter sizes.
Comment: Accepted to InterSpeech 202
A study on joint modeling and data augmentation of multi-modalities for audio-visual scene classification
In this paper, we propose two techniques, namely joint modeling and data
augmentation, to improve system performance for audio-visual scene
classification (AVSC). We employ networks pre-trained only on image data sets
to extract video embeddings, whereas the audio embedding models are trained
from scratch. We explore different neural network
architectures for joint modeling to effectively combine the video and audio
modalities. Moreover, data augmentation strategies are investigated to increase
the audio-visual training set size. For the video modality, the effectiveness
of several operations in RandAugment is verified. An audio-video joint mixup
scheme is proposed to further improve AVSC performance. Evaluated on the
development set of TAU Urban Audio Visual Scenes 2021, our final system can
achieve the best accuracy of 94.2% among all single AVSC systems submitted to
DCASE 2021 Task 1b.
Comment: 5 pages, 1 figure, submitted to INTERSPEECH 202
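The joint mixup idea above can be sketched in a few lines: a single mixing coefficient is drawn per batch pairing and applied to both modalities and the labels, so the audio and video streams of each mixed example stay consistent. Shapes and the embedding sizes are illustrative assumptions, not the paper's:

```python
import numpy as np

# Sketch of an audio-video joint mixup step, assuming per-example audio
# and video embeddings plus one-hot scene labels. One shared lambda mixes
# all three tensors so the two modalities remain aligned.

rng = np.random.default_rng(0)

def joint_mixup(audio, video, labels, alpha=0.2):
    """Mix each example with a randomly permuted partner using one shared lambda."""
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    perm = rng.permutation(len(labels))   # partner assignment
    mixed_audio = lam * audio + (1 - lam) * audio[perm]
    mixed_video = lam * video + (1 - lam) * video[perm]
    mixed_labels = lam * labels + (1 - lam) * labels[perm]
    return mixed_audio, mixed_video, mixed_labels

# Toy batch: 4 examples, 128-d audio embeddings, 512-d video embeddings,
# one-hot labels over 10 scene classes (all sizes assumed for illustration).
audio = rng.standard_normal((4, 128))
video = rng.standard_normal((4, 512))
labels = np.eye(10)[rng.integers(0, 10, size=4)]

ma, mv, ml = joint_mixup(audio, video, labels)
```

Because the same lambda weights both modalities and the labels, each mixed label row still sums to one, and the mixed audio always corresponds to the same scene blend as the mixed video.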
The Relation of Parental Emotion Regulation to Child Autism Spectrum Disorder Core Symptoms: The Moderating Role of Child Cardiac Vagal Activity
This study investigated the role of parental emotion regulation (ER) on children’s core symptoms in families of children with autism spectrum disorders (ASD) in middle childhood; the study also explored whether children’s physiological ER functioning served as a risk or protective factor with respect to parental relationships. Thirty-one Chinese children with ASD (age 6–11) and their primary caregivers participated in this study. Parental ER and child ASD symptoms were collected via questionnaires from parents. Child cardiac vagal activity (derived from heart rate variability) was measured at rest and during a parent-child interaction task. Using moderation analyses, the results showed that parental ER was not directly associated with children’s core ASD symptoms; rather, it interacted significantly with children’s resting cardiac vagal activity, but not task-related changes of cardiac vagal activity, to exert an impact on children’s core ASD symptoms. Specifically, our findings suggested that parents’ difficulties with their own ER significantly impacted their children’s core ASD symptoms only for the children who showed blunted resting cardiac vagal activity. Implications for the future measurement of ER in the family context and future directions for intervention are discussed
