Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks
Much of the focus in the area of knowledge distillation has been on
distilling knowledge from a larger teacher network to a smaller student
network. However, there has been little research on how the concept of
distillation can be leveraged to distill the knowledge encapsulated in the
training data itself into a reduced form. In this study, we explore the concept
of progressive label distillation, where we leverage a series of
teacher-student network pairs to progressively generate distilled training data
for learning deep neural networks with greatly reduced input dimensions. To
investigate the efficacy of the proposed progressive label distillation
approach, we experimented with learning a deep limited-vocabulary speech
recognition network on 500ms input utterances distilled progressively from
1000ms source training data, and demonstrated a significant increase in test
accuracy of almost 78% compared to direct learning.
Comment: 9 pages
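The progressive scheme lends itself to a short sketch. The following is a minimal illustration only, not the authors' implementation: train_fn, the stand-in classifier, and the toy data are assumptions, and hard predicted labels play the role of the distilled labels.

```python
# Minimal sketch of progressive label distillation; train_fn and the toy
# data are hypothetical stand-ins, not the paper's network or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_fn(x, y):
    """Stand-in for training a speech network on (inputs, labels)."""
    return LogisticRegression(max_iter=1000).fit(x, y)

def progressive_label_distillation(x, y, lengths):
    """Distill labels through teacher-student pairs while shrinking inputs.

    x: (N, lengths[0]) source features (e.g. flattened 1000ms utterances)
    lengths: decreasing input widths, e.g. [100, 75, 50]
    """
    cur = lengths[0]
    teacher = train_fn(x[:, :cur], y)       # stage-0 teacher on full inputs
    for nxt in lengths[1:]:
        y = teacher.predict(x[:, :cur])     # teacher's distilled labels
        teacher = train_fn(x[:, :nxt], y)   # student learns on shorter inputs
        cur = nxt                           # student becomes the next teacher
    return x[:, :cur], y, teacher           # distilled data + final network

# Toy usage: random data standing in for limited-vocabulary utterances.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 100))
y = rng.integers(0, 5, size=200)
x_dist, y_dist, net = progressive_label_distillation(x, y, [100, 75, 50])
```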
Quantum measurement in two-dimensional conformal field theories: Application to quantum energy teleportation
We construct a set of quasi-local measurement operators in 2D CFT and use
them to carry out the quantum energy teleportation (QET) protocol. These
measurement operators are built from the projectors obtained from shadow
operators, further acting on the product of two spatially separated primary
fields. In the large central charge limit they are equivalent to the OPE
blocks, up to a UV-cutoff-dependent normalization, although the associated
outcome probabilities are UV-cutoff independent. We then adopt these quantum
measurement operators to show that the QET protocol is viable in general. We
also check the CHSH inequality à la OPE blocks.
Comment: matches the version published in PLB; the main conclusion is
unchanged, and some technical details can be found in the previous version
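For orientation, the standard shadow-operator and OPE-block relations the abstract refers to look schematically as follows; this is a textbook-level sketch with normalizations, kinematic prefactors, and the monodromy projection suppressed, not the paper's specific construction.

```latex
% Projector onto the conformal family of a primary O_k, built from its
% shadow operator \tilde{O}_k (of dimension 2 - \Delta_k in 2D); schematic.
\[
  P_k \;\propto\; \int d^2y \;\mathcal{O}_k(y)\,
  \lvert 0\rangle\langle 0\rvert\,\widetilde{\mathcal{O}}_k(y)
\]
% The OPE block B_k resums that family in the product of two spatially
% separated primaries (kinematic prefactor suppressed):
\[
  \mathcal{O}_i(x_1)\,\mathcal{O}_j(x_2) \;\sim\;
  \sum_k C_{ijk}\,\mathcal{B}_k(x_1, x_2)
\]
% The CHSH combination checked "a la OPE blocks" is the usual one,
% bounded by 2 classically (and by 2*sqrt(2) quantum mechanically):
\[
  \bigl|\langle A_1 B_1\rangle + \langle A_1 B_2\rangle
      + \langle A_2 B_1\rangle - \langle A_2 B_2\rangle\bigr| \;\le\; 2
\]
```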
Glider: A GPU Library Driver for Improved System Security
Legacy device drivers implement both device resource management and
isolation. This results in a large code base with a wide, high-level
interface, making the driver vulnerable to attack. This is particularly
problematic for increasingly popular accelerators like GPUs that have large,
complex drivers. We solve this problem with library drivers, a new driver
architecture. A library driver implements resource management as an untrusted
library in the application process address space, and implements isolation as a
kernel module that is smaller and has a narrower lower-level interface (i.e.,
closer to hardware) than a legacy driver. We articulate a set of device and
platform hardware properties that are required to retrofit a legacy driver into
a library driver. To demonstrate the feasibility and superiority of library
drivers, we present Glider, a library driver implementation for GPUs of two
popular brands, Radeon and Intel. Glider reduces the trusted computing base
(TCB) size and attack
surface by about 35% and 84% respectively for a Radeon HD 6450 GPU and by about
38% and 90% respectively for an Intel Ivy Bridge GPU. Moreover, it incurs no
performance cost. Indeed, Glider outperforms a legacy driver for applications
requiring intensive interaction with the device driver, such as applications
using the OpenGL immediate-mode API.
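The division of labor behind the TCB reduction can be sketched schematically. The class and method names below (IsolationModule, create_channel, map_ring) are illustrative stand-ins, not Glider's actual interface.

```python
# Schematic sketch of the library-driver split; all names are hypothetical.

class IsolationModule:
    """Trusted kernel-side piece: small, with a narrow low-level interface.

    It only enforces isolation: creating a per-process hardware channel and
    mapping that channel's command ring into the owning process.
    """
    def create_channel(self, pid):
        return {"owner": pid, "ring": bytearray(4096)}

    def map_ring(self, channel, pid):
        assert channel["owner"] == pid    # the isolation check
        return channel["ring"]

class LibraryDriver:
    """Untrusted user-space piece: all resource management lives here."""
    def __init__(self, kernel, pid):
        self.ring = kernel.map_ring(kernel.create_channel(pid), pid)
        self.head = 0

    def submit(self, cmd):
        # Memory management, command construction, and scheduling policy run
        # in the application's address space; a bug here can only affect the
        # application itself, not other processes.
        self.ring[self.head:self.head + len(cmd)] = cmd
        self.head += len(cmd)

# Toy usage: the trusted computing base is just IsolationModule's two methods,
# which is why the TCB and attack surface shrink relative to a legacy driver.
kernel = IsolationModule()
driver = LibraryDriver(kernel, pid=1234)
driver.submit(b"\x01\x02\x03")
```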
