Interaction and Signalling Networks: a report from the fourth 'Young Microbiologists Symposium on Microbe Signalling, Organisation and Pathogenesis'
At the end of June, over 120 microbiologists from 18 countries gathered in Dundee, Scotland for the fourth edition of the Young Microbiologists Symposium on ‘Microbe Signalling, Organisation and Pathogenesis’. The aim of the symposium was to give early-career microbiologists the opportunity to present their work in a convivial environment and to interact with senior, world-renowned scientists in exciting fields of microbiology research. The meeting was supported by the Microbiology Society, the Society for Applied Microbiology and the American Society for Microbiology, with further sponsorship from the European Molecular Biology Organisation and the Royal Society of Edinburgh. In this report, we highlight some themes that emerged from the many interesting talks and poster presentations, as well as some of the other activities on offer at this energetic meeting.
Smart Cars: how the Internet of Things is enabling new business models in the automotive industry
Laurea Magistrale (Master's thesis). In this report, I will take an in-depth look at how the car is being transformed through technology, and what the economic consequences of that transformation will be for the many stakeholders in the auto industry around the globe: the auto manufacturers, suppliers, technology and software companies, fleet operators, mobile network operators, insurance agencies, mechanics, the end users, and others.
First of all, I will put everything in context and look at the market shifts and structural changes that underpin the current and future development of the connected car and the autonomous vehicle.
Finally, I will explain my idea of a future business model: a single system working in harmony, rather than many separate entities competing with one another and sharing little data.
I will try to show that it is not necessary to pour investment headlong into Smart Car or autonomous driving technologies; rather, companies should invest more thoughtfully: recognize which of their strengths fit the new technologies, and build the capabilities to differentiate themselves and stand out in the new technological environment.
The real winners in this evolving market will be the players that collaborate across industry lines to create new sources of revenue. By adopting this strategy, auto companies can enhance their profitability, working with various cross-industry collaborators to create shared merchandising packages of interest to consumers. This approach will also open new avenues for reaching potential customers and for fostering relationships in the digital space.
Leveraging over depth in egocentric activity recognition
Activity recognition from first-person videos is a growing research area. The increasing diffusion of egocentric sensors in various devices makes it timely to develop approaches able to recognize fine-grained first-person actions like picking up, putting down, pouring, and so forth. While most previous work focused on RGB data, some authors have pointed out the importance of leveraging depth information in this domain. In this paper, we follow this trend and propose the first deep architecture that uses depth maps as an attention mechanism for first-person activity recognition. Specifically, we blend together the RGB and depth data, so as to obtain an enriched input for the network. This blending puts more or less emphasis on different parts of the image based on their distance from the observer, hence acting as an attention mechanism. To further strengthen the proposed activity recognition protocol, we opt for a self-labeling approach. This, combined with a Conv-LSTM block for extracting temporal information from the various frames, leads to a new state of the art on two publicly available benchmark databases. An ablation study completes our experimental findings, confirming the effectiveness of our approach.
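As a rough sketch of the blending idea described in this abstract, here is one way depth could act as an attention map over the RGB input; the weighting scheme (closer pixels weighted higher) and the tensor shapes are illustrative assumptions, not the paper's exact formulation:

import torch

def depth_attention_blend(rgb, depth, alpha=0.5):
    """Blend an RGB frame with a depth-derived attention map (sketch).

    rgb:   (B, 3, H, W) tensor in [0, 1]
    depth: (B, 1, H, W) tensor in [0, 1]; assumption: larger = farther
    """
    # Closer regions receive higher weights (assumed scheme: 1 - depth).
    attention = 1.0 - depth
    # Re-weight the RGB channels, keeping a residual term alpha so that
    # distant regions are de-emphasized rather than erased entirely.
    return rgb * (alpha + (1.0 - alpha) * attention)

if __name__ == "__main__":
    rgb = torch.rand(2, 3, 224, 224)
    depth = torch.rand(2, 1, 224, 224)
    print(depth_attention_blend(rgb, depth).shape)  # torch.Size([2, 3, 224, 224])

The enriched tensor can then be fed to any standard backbone in place of the raw RGB frames.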
Self-Supervised Joint Encoding of Motion and Appearance for First Person Action Recognition
Wearable cameras are becoming more and more popular in several applications, increasing the interest of the research community in developing approaches for recognizing actions from the first-person point of view. An open challenge in egocentric action recognition is that videos lack detailed information about the main actor's pose, and thus tend to record only parts of the movement when focusing on manipulation tasks. The amount of information about the action itself is therefore limited, making the understanding of the manipulated objects and their context crucial. Many previous works addressed this issue with two-stream architectures, where one stream is dedicated to modeling the appearance of objects involved in the action, and another to extracting motion features from optical flow. In this paper, we argue that learning features jointly from these two information channels is beneficial to better capture the spatio-temporal correlations between them. To this end, we propose a single-stream architecture able to do so, thanks to the addition of a self-supervised block that uses a pretext motion prediction task to intertwine motion and appearance knowledge. Experiments on several publicly available databases show the power of our approach.
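To make the idea concrete, below is a minimal sketch of a single-stream network with an auxiliary self-supervised motion head; the backbone, the motion target (e.g., a downsampled flow-magnitude map), and the loss weighting are all assumptions for illustration, not the paper's architecture:

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEncoder(nn.Module):
    """Single-stream encoder with a self-supervised motion head (sketch).

    The shared backbone sees only RGB; the auxiliary head is trained to
    predict a motion target, forcing appearance features to also encode
    motion cues.
    """
    def __init__(self, num_classes, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a real CNN backbone
            nn.Conv2d(3, feat_dim, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.action_head = nn.Linear(feat_dim * 8 * 8, num_classes)
        self.motion_head = nn.Conv2d(feat_dim, 1, kernel_size=1)  # pretext head

    def forward(self, rgb):
        feats = self.backbone(rgb)                   # (B, C, 8, 8)
        logits = self.action_head(feats.flatten(1))  # action prediction
        motion = self.motion_head(feats)             # (B, 1, 8, 8) pretext output
        return logits, motion

def joint_loss(logits, labels, motion_pred, motion_target, lam=1.0):
    # Supervised action loss plus self-supervised motion prediction loss.
    return F.cross_entropy(logits, labels) + lam * F.mse_loss(motion_pred, motion_target)

At inference time only the action head is used; the motion head exists solely to shape the shared features during training.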
Une iconographie revisitée: Saint Augustin entre le Christ et la Vierge de Rubens
This contribution addresses the diffusion of an unusual iconography that originated in the milieu of the Augustinian Hermits. Saint Augustine, placed between the nursing Virgin and the crucified Christ, is the protagonist of a reflection, at times spelled out by the Latin verses of the accompanying scrolls: in contemplating the Incarnation and the Redemption, it is impossible to make any choice between them. Among the versions of the subject, particular attention is devoted to Rubens's masterpiece in the Real Academia de Bellas Artes de San Fernando in Madrid.
Domain generalization through audio-visual relative norm alignment in first person action recognition
First-person action recognition is becoming an increasingly researched area thanks to the rising popularity of wearable cameras. This is bringing to light cross-domain issues that are yet to be addressed in this context. Indeed, the information extracted from learned representations suffers from an intrinsic "environmental bias". This strongly affects the ability to generalize to unseen scenarios, limiting the application of current methods to real settings where labeled data are not available during training. In this work, we introduce the first domain generalization approach for egocentric activity recognition, by proposing a new audio-visual loss, called Relative Norm Alignment loss. It rebalances the contributions of the two modalities during training, over different domains, by aligning their feature norm representations. Our approach leads to strong results in domain generalization on both EPIC-Kitchens-55 and EPIC-Kitchens-100, as demonstrated by extensive experiments, and can also be extended to domain adaptation settings with competitive results.
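One plausible reading of such a norm-alignment loss, sketched below, penalizes the deviation from 1 of the ratio between the mean audio and visual feature norms; the exact formulation used in the paper may differ in its details:

import torch

def rna_loss(audio_feats, visual_feats):
    """Relative Norm Alignment loss (sketch, assumed formulation).

    Penalizes imbalance between the mean L2 feature norms of the two
    modalities, pushing their ratio toward 1 so that neither modality
    dominates training. audio_feats, visual_feats: (B, D) tensors.
    """
    audio_norm = audio_feats.norm(p=2, dim=1).mean()
    visual_norm = visual_feats.norm(p=2, dim=1).mean()
    return (audio_norm / visual_norm - 1.0) ** 2

In practice a term like this would be added to the classification loss with a weighting coefficient tuned on validation data.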
PoliTO-IIT Submission to the EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition
In this report, we describe the technical details of our submission to the EPIC-Kitchens-100 Unsupervised Domain Adaptation (UDA) Challenge in Action Recognition. To tackle the domain shift which exists under the UDA setting, we first exploited a recent Domain Generalization (DG) technique, called Relative Norm Alignment (RNA). It consists in designing a model able to generalize well to any unseen domain, regardless of the possibility to access target data at training time. Then, in a second phase, we extended the approach to work on unlabelled target data, allowing the model to adapt to the target distribution in an unsupervised fashion. For this purpose, we included in our framework existing UDA algorithms, such as the Temporal Attentive Adversarial Adaptation Network (TA3N), jointly with new multi-stream consistency losses, namely Temporal Hard Norm Alignment (T-HNA) and Min-Entropy Consistency (MEC). Our submission (entry 'plnet') is visible on the leaderboard and achieved 1st position for 'verb' and 3rd position for both 'noun' and 'action'.
Comment: 3rd place in the 2021 EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition.
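The report above does not spell these losses out; purely as a hypothetical illustration, a min-entropy consistency objective on unlabelled target clips might combine per-stream entropy minimization with cross-stream agreement, e.g.:

import torch
import torch.nn.functional as F

def min_entropy_consistency(logits_rgb, logits_flow):
    """Hypothetical sketch of a min-entropy consistency loss.

    On unlabelled target clips: (1) make each stream's prediction
    confident (low entropy) and (2) make the two streams agree.
    This is an assumed formulation, not the submission's actual MEC loss.
    """
    p_rgb = F.softmax(logits_rgb, dim=1)
    p_flow = F.softmax(logits_flow, dim=1)
    entropy = (-(p_rgb * p_rgb.clamp_min(1e-8).log()).sum(dim=1).mean()
               - (p_flow * p_flow.clamp_min(1e-8).log()).sum(dim=1).mean())
    consistency = F.mse_loss(p_rgb, p_flow)
    return entropy + consistency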
Bringing Online Egocentric Action Recognition into the wild
To enable safe and effective human-robot cooperation, it is crucial to develop models for the identification of human activities. Egocentric vision seems to be a viable solution to this problem, and therefore many works provide deep learning solutions to infer human actions from first-person videos. However, although very promising, most of these do not consider the major challenges that come with a realistic deployment, such as the portability of the model, the need for real-time inference, and robustness to novel domains (i.e., new spaces, users, tasks). With this paper, we set the boundaries that egocentric vision models should consider for realistic applications, defining a novel setting of egocentric action recognition in the wild, which encourages researchers to develop novel, application-aware solutions. We also present a new model-agnostic technique that enables the rapid repurposing of existing architectures in this new context, demonstrating the feasibility of deploying a model on a tiny device (Jetson Nano) and performing the task directly on the edge with very low energy consumption (2.4 W on average at 50 fps).
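The abstract does not detail the repurposing technique itself; as an illustration of the deployment side only, a common first step for edge inference on a Jetson Nano is exporting the trained network to ONNX and then compiling it with TensorRT on-device. The model and input size below are placeholders, not the paper's network:

import torch
import torchvision

# Placeholder model; in practice this would be the trained
# egocentric action recognition network.
model = torchvision.models.mobilenet_v2(num_classes=10).eval()

# Dummy input: one RGB frame at 224x224 (assumed input size).
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model, dummy, "egocentric_action.onnx",
    input_names=["frame"], output_names=["logits"],
    dynamic_axes={"frame": {0: "batch"}},  # allow variable batch size
)
# The resulting .onnx file can be compiled with TensorRT on the Jetson Nano.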
