Teleporting a quantum state in a subset of the whole Hilbert space
We investigate the lower bound on the amount of entanglement required for faithfully teleporting a quantum state that belongs to a subset of the whole Hilbert space. Moreover, when the quantum state belongs to a set composed of two states, a probabilistic teleportation scheme is presented that uses a non-maximally entangled state as the quantum channel. We also calculate the average transmission efficiency of this scheme.
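For orientation only (this is the textbook setup for probabilistic teleportation over a partially entangled channel, not necessarily the scheme of this paper): with a channel state of the form below, unit-fidelity teleportation of an arbitrary input succeeds only probabilistically, and the standard success probability is

\[
  |\phi\rangle_{\text{ch}} = a\,|00\rangle + b\,|11\rangle,
  \qquad |a|^{2} + |b|^{2} = 1,\ |b| \le |a|,
  \qquad P_{\text{succ}} = 2|b|^{2}.
\]

The average transmission efficiency mentioned in the abstract would then be an average of such success probabilities over the relevant two-state set; the exact expression depends on the paper's scheme.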
Embedding Representation of Academic Heterogeneous Information Networks Based on Federated Learning
Academic networks in the real world can usually be portrayed as heterogeneous information networks (HINs) with multiple node types, dense connectivity and multiple relation types. Existing representation learning methods for homogeneous information networks are not applicable to heterogeneous information networks because they cannot handle this heterogeneity. At the same time, data has become a factor of production and plays an increasingly important role; because business data is kept closed and blocked off between enterprises, data islands have become a serious problem. To address these challenges, focusing on the data of scientific research teams closely related to science and technology, we propose an academic heterogeneous information network embedding representation learning method based on federated learning (FedAHE), which uses node attention and meta-path attention mechanisms to learn low-dimensional, dense, real-valued vector representations while preserving the rich topological information and meta-path-based semantic information of nodes in the network. Moreover, we combine federated learning with the representation learning of HINs composed of scientific research teams and put forward a federated training mechanism based on dynamic weighted aggregation of parameters (FedDWA) to optimize the node embeddings of HINs. Extensive experiments demonstrate the efficiency, accuracy and feasibility of the proposed framework.
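The abstract does not spell out how FedDWA computes its aggregation weights, so the following is only an illustrative sketch of dynamic weighted parameter aggregation in a federated setting; the weighting rule (a softmax over negative client validation losses) and the function name dynamic_weighted_aggregation are assumptions for illustration, not the authors' method.

import numpy as np

def dynamic_weighted_aggregation(client_params, client_losses, temperature=1.0):
    """Aggregate client parameter vectors with weights that adapt to client quality.

    client_params: list of 1-D numpy arrays (flattened model parameters), one per client.
    client_losses: list of floats (e.g., each client's validation loss this round).
    Clients with lower loss receive larger weights (softmax over negative losses).
    NOTE: this weighting rule is an assumption; the paper's FedDWA rule is not
    specified in the abstract.
    """
    losses = np.asarray(client_losses, dtype=float)
    logits = -losses / temperature
    logits -= logits.max()                      # numerical stability
    weights = np.exp(logits) / np.exp(logits).sum()
    stacked = np.stack(client_params, axis=0)   # shape: (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0), weights

# Toy usage: three clients, each holding a 4-parameter "model".
if __name__ == "__main__":
    params = [np.random.randn(4) for _ in range(3)]
    losses = [0.9, 0.4, 0.6]
    global_params, w = dynamic_weighted_aggregation(params, losses)
    print("aggregation weights:", np.round(w, 3))
    print("aggregated parameters:", np.round(global_params, 3))

In a full federated round, the server would broadcast the aggregated parameters back to the clients before the next local training step; only the aggregation step is sketched here.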
Unsupervised Semantic Representation Learning of Scientific Literature Based on Graph Attention Mechanism and Maximum Mutual Information
Since most scientific literature data are unlabeled, unsupervised graph-based semantic representation learning is crucial. We therefore propose an unsupervised semantic representation learning method for scientific literature based on a graph attention mechanism and maximum mutual information (GAMMI). By introducing a graph attention mechanism, neighboring node features are aggregated by weighted summation, with the weights depending entirely on the node features themselves; different weights can thus be assigned to each neighbor, so correlations between vertex features are better integrated into the model. In addition, an unsupervised graph contrastive learning strategy is proposed to cope with the lack of labels and to scale to large graphs. By contrasting the mutual information between positive and negative local node representations in the latent space and the global graph representation, the graph neural network can capture both local and global information. Experimental results demonstrate competitive performance on various node classification benchmarks, sometimes even surpassing supervised learning.
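The local-global contrast described above resembles the Deep-Graph-Infomax style of objective; the sketch below assumes that general recipe (attention-weighted neighbor aggregation, a corrupted graph for negatives, and a bilinear discriminator trained with binary cross-entropy) and is not the authors' exact GAMMI implementation.

import numpy as np

rng = np.random.default_rng(0)

def attention_aggregate(X, A, W, a_vec):
    """One simplified attention-weighted aggregation layer (GAT-style)."""
    H = X @ W                                    # project node features
    n = H.shape[0]
    scores = np.full((n, n), -1e9)               # mask non-edges
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                scores[i, j] = np.tanh(a_vec @ np.concatenate([H[i], H[j]]))
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return np.tanh(alpha @ H)                    # weighted summation of neighbor features

def local_global_mi_loss(H_pos, H_neg, W_d):
    """DGI-style contrastive loss: positive nodes should agree with the global
    summary, nodes from a corrupted graph should not."""
    s = H_pos.mean(axis=0)                       # global (readout) representation
    def score(H):                                # bilinear discriminator
        return 1.0 / (1.0 + np.exp(-(H @ W_d @ s)))
    eps = 1e-9
    return -(np.log(score(H_pos) + eps).mean()
             + np.log(1.0 - score(H_neg) + eps).mean())

# Toy example: 6 nodes, 5 input features, 4-dimensional embeddings.
n, f, d = 6, 5, 4
X = rng.normal(size=(n, f))
A = (rng.random((n, n)) < 0.4).astype(float); np.fill_diagonal(A, 1.0)
W, a_vec, W_d = rng.normal(size=(f, d)), rng.normal(size=2 * d), rng.normal(size=(d, d))

H_pos = attention_aggregate(X, A, W, a_vec)
H_neg = attention_aggregate(X[rng.permutation(n)], A, W, a_vec)   # corrupted features
print("contrastive MI loss:", round(local_global_mi_loss(H_pos, H_neg, W_d), 4))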
Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning
In recent years, pre-trained multimodal large models have attracted widespread attention due to their outstanding performance in various multimodal applications. Nonetheless, the extensive computational resources and vast datasets required for their training present significant hurdles for deployment in environments with limited computational resources. To address this challenge, we propose, for the first time, a dynamic self-adaptive multiscale distillation method that transfers knowledge from a pre-trained multimodal large model for efficient cross-modal representation learning. Unlike existing distillation methods, our strategy employs a multiscale perspective, enabling the extraction of structural knowledge across scales from the pre-trained multimodal large model and ensuring that the student model inherits a comprehensive and nuanced understanding of the teacher's knowledge. To optimize each distillation loss in a balanced and efficient manner, we propose a dynamic self-adaptive distillation loss balancer, a novel component that eliminates the need for manual loss-weight adjustment and dynamically balances each loss term during distillation. Our methodology streamlines pre-trained multimodal large models using only their output features and original image-level information, requiring minimal computational resources. This efficient approach is suited to a variety of applications and allows advanced multimodal technologies to be deployed even in resource-limited settings. Extensive experiments have demonstrated that our method maintains high performance while significantly reducing model complexity and training costs. Moreover, our distilled student model uses only image-level information to achieve state-of-the-art performance on cross-modal retrieval tasks, surpassing previous methods that relied on region-level information.
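The abstract does not describe how the loss balancer computes its weights, so the sketch below shows one common way to combine several losses without hand-tuned weights (normalizing each term by a running estimate of its own magnitude); the class name DynamicLossBalancer and the inverse-magnitude rule are illustrative assumptions, not the paper's mechanism.

import numpy as np

class DynamicLossBalancer:
    """Combine several distillation losses without hand-tuned weights.

    Each loss is divided by a running (EMA) estimate of its own magnitude, so
    that losses on very different scales contribute comparably to the total.
    NOTE: this inverse-magnitude rule is an illustrative assumption; the
    paper's balancer is not described in the abstract.
    """

    def __init__(self, num_losses, momentum=0.9):
        self.momentum = momentum
        self.running = np.ones(num_losses)

    def __call__(self, losses):
        losses = np.asarray(losses, dtype=float)
        # Update running magnitude estimates, then normalize each term.
        self.running = self.momentum * self.running + (1 - self.momentum) * losses
        weights = 1.0 / (self.running + 1e-8)
        weights = weights / weights.sum() * len(losses)   # keep total scale stable
        return float((weights * losses).sum()), weights

# Toy usage: three distillation losses at very different scales (e.g., one per distillation scale).
balancer = DynamicLossBalancer(num_losses=3)
for step in range(3):
    raw = [5.0 / (step + 1), 0.05, 0.7]          # dummy loss values
    total, w = balancer(raw)
    print(f"step {step}: total={total:.3f}, weights={np.round(w, 3)}")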
