Event Transformer+. A multi-purpose solution for efficient event data processing
Event cameras record sparse illumination changes with high temporal resolution and high dynamic range. Thanks to their sparse recording and low power consumption, they are increasingly used in applications such as AR/VR and autonomous driving. Current top-performing methods often ignore specific event-data properties, leading to the development of generic but computationally expensive algorithms, while event-aware methods do not perform as well. We propose Event Transformer+, which improves our earlier work EvT with a refined patch-based event representation and a more robust backbone to achieve more accurate results, while still benefiting from event-data sparsity to increase its efficiency. Additionally, we show how our system can work with different data modalities and propose task-specific output heads, for event-stream predictions (e.g., action recognition) and per-pixel predictions (e.g., dense depth estimation). Evaluation results show better performance than the state-of-the-art while requiring minimal computation resources, both on GPU and CPU.
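The core of the approach is the patch-based event representation, which keeps only the spatial patches that actually received events. Below is a minimal sketch of such a representation, assuming events arrive as a list of (x, y, t, polarity) tuples; the function name, patch size, and window length are illustrative choices, not the published EvT+ configuration.

    import numpy as np

    def events_to_patches(events, height, width, patch=16, window_us=24000):
        """Bin events into per-window patch histograms and keep only the
        patches that received events (exploiting event sparsity).
        Assumes height and width are multiples of the patch size."""
        xs, ys, ts, ps = (np.asarray(c) for c in zip(*events))
        xs, ys = xs.astype(int), ys.astype(int)
        gh, gw = height // patch, width // patch
        frames = []
        for t0 in range(int(ts.min()), int(ts.max()) + 1, window_us):
            m = (ts >= t0) & (ts < t0 + window_us)
            hist = np.zeros((gh, gw, 2), dtype=np.float32)  # +/- polarity counts
            np.add.at(hist, (ys[m] // patch, xs[m] // patch,
                             (ps[m] > 0).astype(int)), 1)
            active = np.argwhere(hist.sum(-1) > 0)     # activated patches only
            tokens = hist[active[:, 0], active[:, 1]]  # one token per patch
            frames.append((active, tokens))
        return frames

Only the activated patches are passed on to the backbone, so computation scales with event activity rather than with sensor resolution.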
Performance of object recognition in wearable videos
Wearable technologies are enabling many new applications of computer vision, from life logging to health assistance. Many of them need to recognize the elements of interest in the scene captured by the camera. This work studies the problem of object detection and localization in videos captured by this type of camera. Wearable videos are a much more challenging scenario for object detection than standard images or even other types of video, due to lower-quality images (e.g., poor focus) and the high clutter and occlusion common in wearable recordings. Existing work typically focuses on detecting the objects the user wearing the camera is focusing on or manipulating. We perform a more general evaluation of object detection in this type of video, because numerous applications, such as marketing studies, also need to detect objects that the user is not focusing on. This work presents a thorough study of the well-known YOLO architecture, which offers an excellent trade-off between accuracy and speed, for the particular case of object detection in wearable video. We focus our study on the public ADL Dataset, but we also use additional public data for complementary evaluations. We run an exhaustive set of experiments with different variations of the original architecture and its training strategy. Our experiments lead to several conclusions about the most promising directions for our goal and point us to further research steps to improve detection in wearable videos.
Comment: Emerging Technologies and Factory Automation, ETFA, 201
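For reference, a detection pass of this kind can be reproduced with OpenCV's DNN module, which loads Darknet-trained YOLO models directly. This is a hedged sketch, not the exact experimental setup: the model files, clip name, and confidence threshold are placeholders.

    import cv2

    # Load a Darknet YOLO model (cfg/weights paths are placeholders).
    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    out_layers = net.getUnconnectedOutLayersNames()

    cap = cv2.VideoCapture("wearable_clip.mp4")  # hypothetical ADL-style clip
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        for out in net.forward(out_layers):      # one array per output scale
            for det in out:                      # [cx, cy, w, h, obj, classes...]
                scores = det[5:]
                if det[4] * scores.max() > 0.5:  # placeholder threshold
                    print("frame detection: class", int(scores.argmax()))
    cap.release()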
Event Transformer. A sparse-aware solution for efficient event data processing
Event cameras are sensors of great interest for many applications that run in low-resource and challenging environments. They log sparse illumination changes with high temporal resolution and high dynamic range, while consuming minimal power. However, top-performing methods often ignore specific event-data properties, leading to the development of generic but computationally expensive algorithms. Efforts toward efficient solutions, in turn, usually do not achieve top accuracy on complex tasks. This work proposes a novel framework, Event Transformer (EvT), that effectively takes advantage of event-data properties to be highly efficient and accurate. We introduce a new patch-based event representation and a compact transformer-like architecture to process it. EvT is evaluated on different event-based benchmarks for action and gesture recognition. Evaluation results show accuracy better than or comparable to the state-of-the-art while requiring significantly less computation resources, which makes EvT able to work with minimal latency on both GPU and CPU.
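A compact transformer backbone over sparse patch tokens can be sketched as follows, assuming one token per activated patch as in the representation sketched earlier. The embedding size, head count, and depth are illustrative, and positional information for the active patches is omitted for brevity, so this is not the published EvT architecture.

    import torch
    import torch.nn as nn

    class SparsePatchClassifier(nn.Module):
        """Classify an event stream from tokens of activated patches only."""
        def __init__(self, token_dim=2, d_model=128, n_classes=10):
            super().__init__()
            self.embed = nn.Linear(token_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # class token
            self.head = nn.Linear(d_model, n_classes)

        def forward(self, tokens):  # tokens: (batch, n_active_patches, token_dim)
            x = self.embed(tokens)
            x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1)
            x = self.encoder(x)     # attention only over activated patches
            return self.head(x[:, 0])

Because the sequence length equals the number of activated patches, sparser event streams translate directly into less attention computation.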
Semi-Supervised Semantic Segmentation with Pixel-Level Contrastive Learning from a Class-wise Memory Bank
This work presents a novel approach for semi-supervised semantic segmentation. The key element of this approach is our contrastive learning module, which forces the segmentation network to yield similar pixel-level feature representations for same-class samples across the whole dataset. To achieve this, we maintain a memory bank continuously updated with relevant and high-quality feature vectors from labeled data. During end-to-end training, the features from both labeled and unlabeled data are optimized to be similar to same-class samples from the memory bank. Our approach outperforms the current state-of-the-art for semi-supervised semantic segmentation and semi-supervised domain adaptation on well-known public benchmarks, with larger improvements in the most challenging scenarios, i.e., when less labeled data is available.
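The mechanism can be illustrated with a short sketch: a per-class FIFO memory of labeled features, and an InfoNCE-style loss that pulls each pixel feature toward same-class memory entries and away from other classes. The queue size, temperature, and update rule here are illustrative choices, not necessarily those of the paper.

    import torch
    import torch.nn.functional as F

    class ClassMemoryBank:
        """FIFO queue of feature vectors per class, filled from labeled data."""
        def __init__(self, n_classes, dim, size=256):
            self.bank = torch.zeros(n_classes, size, dim)
            self.ptr = torch.zeros(n_classes, dtype=torch.long)
            self.size = size

        @torch.no_grad()
        def push(self, feats, labels):
            for c in labels.unique():
                for v in F.normalize(feats[labels == c], dim=1):
                    self.bank[c, self.ptr[c] % self.size] = v
                    self.ptr[c] += 1

    def pixel_contrastive_loss(feats, labels, bank, tau=0.1):
        """Each pixel feature (labels may be pseudo-labels for unlabeled data)
        should be closer to same-class memory entries than to other classes."""
        feats = F.normalize(feats, dim=1)                  # (n_pixels, dim)
        mem = F.normalize(bank.bank.flatten(0, 1), dim=1)  # (C*size, dim)
        mem_labels = torch.arange(bank.bank.size(0)).repeat_interleave(bank.size)
        log_prob = (feats @ mem.t() / tau).log_softmax(dim=1)
        pos = (labels.unsqueeze(1) == mem_labels.unsqueeze(0)).float()
        return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()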
Foregut Cystic Malformations in the Pancreas. Are Definitions Clearly Established?
Context: Foregut cystic malformations are common lesions in the mediastinum but are rarely found in subdiaphragmatic locations. Only a few cases have been described within the pancreas, where they can easily be misdiagnosed as cystic neoplasms. Case report: We present the case of a 37-year-old female with acute cholangitis in whom the diagnostic work-up revealed a 1 cm solid-cystic heterogeneous lesion located at the head of the pancreas. The patient underwent a pancreaticoduodenectomy. Pathological evaluation demonstrated a cystic cavity lined by pseudostratified tall columnar ciliated epithelium with goblet cells, but lacking cartilage or smooth muscle bundles. Thus, the final diagnosis was a ciliated foregut cyst of the pancreas. Conclusions: A review of the published cases shows great variability in taxonomy and a lack of accuracy in the definitions of the different subtypes. An easy-to-use algorithm for diagnosing the subtypes of foregut cystic malformations, based on epithelial lining and wall features, is presented.
Image: Diagnostic algorithm of the different foregut cystic malformations.
