Structure of the saxiphilin:saxitoxin (STX) complex reveals a convergent molecular recognition strategy for paralytic toxins.
Dinoflagellates and cyanobacteria produce saxitoxin (STX), a lethal bis-guanidinium neurotoxin causing paralytic shellfish poisoning. A number of metazoans have soluble STX-binding proteins that may prevent STX intoxication. However, their STX molecular recognition mechanisms remain unknown. Here, we present structures of saxiphilin (Sxph), a bullfrog high-affinity STX-binding protein, alone and bound to STX. The structures reveal a novel high-affinity STX-binding site built from a "proto-pocket" on a transferrin scaffold that also bears thyroglobulin domain protease inhibitor repeats. Comparison of Sxph and voltage-gated sodium channel STX-binding sites reveals a convergent toxin recognition strategy comprising a largely rigid binding site where acidic side chains and a cation-π interaction engage STX. These studies reveal molecular rules for STX recognition, outline how a toxin-binding site can be built on a naïve scaffold, and open a path to developing protein sensors for environmental STX monitoring and new biologics for mitigating STX intoxication.
Detect-and-Track: Efficient Pose Estimation in Videos
This paper addresses the problem of estimating and tracking human body keypoints in complex, multi-person video. We propose an extremely lightweight yet highly effective approach that builds upon the latest advancements in human detection and video understanding. Our method operates in two stages: keypoint estimation in frames or short clips, followed by lightweight tracking that links keypoint predictions over the entire video. For frame-level pose estimation we experiment with Mask R-CNN, as well as our own proposed 3D extension of this model, which leverages temporal information over small clips to generate more robust frame predictions. We conduct extensive ablative experiments on the newly released multi-person video pose estimation benchmark, PoseTrack, to validate various design choices of our model. Our approach achieves 55.2% on the validation set and 51.8% on the test set under the Multi-Object Tracking Accuracy (MOTA) metric, and achieves state-of-the-art performance on the ICCV 2017 PoseTrack keypoint tracking challenge.
Comment: In CVPR 2018. Ranked first in the ICCV 2017 PoseTrack challenge (keypoint tracking in videos). Code: https://github.com/facebookresearch/DetectAndTrack and webpage: https://rohitgirdhar.github.io/DetectAndTrack
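The two-stage design lends itself to a compact illustration. Below is a minimal sketch of the second stage only: linking per-frame keypoint detections into tracks by bipartite matching on keypoint distance. The cost function, threshold, and helper names are assumptions made for illustration, not the authors' released implementation (see the repository linked above for that).

```python
# Hedged sketch: greedy track linking via bipartite matching on mean
# keypoint distance. Data layout and threshold are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(prev_tracks, detections, dist_thresh=100.0):
    """Assign current-frame detections to existing tracks or start new ones.

    prev_tracks: list of dicts {'id': int, 'keypoints': (K, 2) array}
    detections:  list of (K, 2) keypoint arrays for the current frame
    """
    if not prev_tracks:
        return [{'id': i, 'keypoints': d} for i, d in enumerate(detections)]

    # Cost matrix: mean Euclidean distance between corresponding keypoints.
    cost = np.zeros((len(prev_tracks), len(detections)))
    for i, t in enumerate(prev_tracks):
        for j, d in enumerate(detections):
            cost[i, j] = np.linalg.norm(t['keypoints'] - d, axis=1).mean()

    rows, cols = linear_sum_assignment(cost)
    next_id = max(t['id'] for t in prev_tracks) + 1
    new_tracks, assigned = [], set()
    for i, j in zip(rows, cols):
        if cost[i, j] < dist_thresh:  # accept only sufficiently close matches
            new_tracks.append({'id': prev_tracks[i]['id'], 'keypoints': detections[j]})
            assigned.add(j)
    for j, d in enumerate(detections):  # unmatched detections start new tracks
        if j not in assigned:
            new_tracks.append({'id': next_id, 'keypoints': d})
            next_id += 1
    return new_tracks
```

Running this per frame over the output of the keypoint-estimation stage yields identity-consistent keypoint tracks across the video.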
A Closer Look at Spatiotemporal Convolutions for Action Recognition
In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of a video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significant gains in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block, "R(2+1)D", which gives rise to CNNs that achieve results comparable or superior to the state of the art on Sports-1M, Kinetics, UCF101, and HMDB51.
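The factorization described above can be sketched directly. Below is a minimal PyTorch illustration of a (2+1)D block in which a t×d×d 3D convolution is split into a 1×d×d spatial convolution followed by a t×1×1 temporal convolution, with the intermediate channel count chosen to roughly match the parameter count of the full 3D filter. Layer names and the surrounding module structure are assumptions for illustration, not the paper's reference implementation.

```python
# Hedged sketch of an R(2+1)D-style block: a full t x d x d 3D convolution is
# factorized into a 1 x d x d spatial convolution followed by a t x 1 x 1
# temporal convolution, with BatchNorm + ReLU in between.
import torch
import torch.nn as nn

class R2Plus1dConv(nn.Module):
    def __init__(self, in_channels, out_channels, t=3, d=3):
        super().__init__()
        # Intermediate channels chosen so the factorized block has roughly
        # the same parameter count as the full 3D convolution.
        mid = (t * d * d * in_channels * out_channels) // (
            d * d * in_channels + t * out_channels)
        self.spatial = nn.Conv3d(in_channels, mid,
                                 kernel_size=(1, d, d),
                                 padding=(0, d // 2, d // 2), bias=False)
        self.bn = nn.BatchNorm3d(mid)
        self.relu = nn.ReLU(inplace=True)
        self.temporal = nn.Conv3d(mid, out_channels,
                                  kernel_size=(t, 1, 1),
                                  padding=(t // 2, 0, 0), bias=False)

    def forward(self, x):  # x: (N, C, T, H, W)
        return self.temporal(self.relu(self.bn(self.spatial(x))))

# Example: a batch of two clips, 8 RGB frames each at 112x112 resolution.
x = torch.randn(2, 3, 8, 112, 112)
y = R2Plus1dConv(3, 64)(x)  # -> shape (2, 64, 8, 112, 112)
```

Compared with a monolithic 3D convolution, this decomposition inserts an extra nonlinearity per block and eases optimization while keeping the parameter budget roughly constant, which is the intuition the paper gives for its accuracy gains.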
