419 research outputs found
Internet of robotic things: converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT platforms
The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which advance by building on IoT technology. The IoT influence brings new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT raises new convergence challenges that need to be addressed, among them the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things, their cooperation, coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating intelligent “devices”, such as collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance as new “cognitive devices” become active participants in IoT applications. This chapter aims to give an overview of the IoRT concept, technologies, architectures and applications, and to provide comprehensive coverage of future challenges, developments and applications.
Bioinformatics and Medicine in the Era of Deep Learning
Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery. In no area is this as clear as in bioinformatics, led by technological breakthroughs in data acquisition technologies. It has been argued that bioinformatics could quickly become the field of research generating the largest data repositories, beating other data-intensive areas such as high-energy physics or astroinformatics. Over the last decade, deep learning has become a disruptive advance in machine learning, giving new life to the long-standing connectionist paradigm in artificial intelligence. Deep learning methods are ideally suited to large-scale data and, therefore, should be ideally suited to knowledge discovery in bioinformatics and biomedicine at large. In this brief paper, we review key aspects of the application of deep learning in bioinformatics and medicine, drawing from the themes covered by the contributions to an ESANN 2018 special session devoted to this topic.
Robotic ubiquitous cognitive ecology for smart homes
Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON in each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and proactively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.
Probabilistic learning on graphs via contextual architectures
We propose a novel methodology for representation learning on graph-structured data, in which a stack of Bayesian networks learns different distributions of a vertex's neighbourhood. Through an incremental construction policy and layer-wise training, we can build deeper architectures than typical graph convolutional neural networks, with benefits in terms of context spreading between vertices. First, the model learns from graphs via maximum likelihood estimation without using target labels. Then, a supervised readout is applied to the learned graph embeddings to deal with graph classification and vertex classification tasks, showing competitive results against neural models for graphs. The computational complexity is linear in the number of edges, facilitating learning on large-scale data sets. By studying how depth affects the performance of our model, we discover that a broader context generally improves performance. In turn, this leads to a critical analysis of some benchmarks used in the literature.
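To make the context-spreading idea and the linear-in-edges cost concrete, here is a minimal sketch of one frozen layer in a simplified discrete-state setting: each vertex summarises the states its neighbours took at the previous layer, in a single pass over the edge list. This is an illustrative abstraction, not the actual Bayesian-network model, which is trained by maximum likelihood; the names (edge_index, num_states) are assumptions for the example.

```python
import numpy as np

def spread_context(edge_index, prev_states, num_states):
    """One frozen layer: summarise each vertex's neighbourhood as the
    empirical distribution of neighbour states from the previous layer.
    A single pass over the edge list keeps the cost O(|E|)."""
    num_vertices = prev_states.shape[0]
    context = np.zeros((num_vertices, num_states))
    for src, dst in edge_index:            # one pass over the edges
        context[dst, prev_states[src]] += 1.0
    row_sums = context.sum(axis=1, keepdims=True)
    return context / np.maximum(row_sums, 1.0)

# Toy usage: a 4-vertex directed cycle (both directions), two vertex states.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 0), (2, 1), (3, 2), (0, 3)]
states = np.array([0, 1, 0, 1])            # states inferred by layer l-1
print(spread_context(edges, states, num_states=2))
```

Stacking such layers, each reading the output of the previous one, is how a vertex's context can grow with depth while every layer remains a single sweep over the edges.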
A cognitive robotic ecology approach to self-configuring and evolving AAL systems
Robotic ecologies are systems made up of several robotic devices, including mobile robots, wireless sensors and effectors embedded in everyday environments, where they cooperate to achieve complex tasks. This paper demonstrates how endowing robotic ecologies with information-processing capabilities such as perception, learning, planning and novelty detection can make these systems able to deliver modular, flexible, manageable and dependable Ambient Assisted Living (AAL) solutions. Specifically, we show how the integrated and self-organising cognitive solutions implemented within the EU project RUBICON (Robotic UBIquitous Cognitive Network) can reduce the need for costly pre-programming and maintenance of robotic ecologies. We illustrate how these solutions can be harnessed to (i) deliver a range of assistive services by coordinating the sensing & acting capabilities of heterogeneous devices, (ii) adapt and tune the overall behaviour of the ecology to the preferences and behaviour of its inhabitants, and (iii) deal with novel events arising from new user activities and changing user habits.
Extending OpenStack Monasca for Predictive Elasticity Control
Traditional auto-scaling approaches are conceived as reactive automations, typically triggered when predefined thresholds are breached by resource consumption metrics. Managing such rules at scale is cumbersome, especially when resources require non-negligible time to be instantiated. This paper introduces an architecture for predictive cloud operations, which enables orchestrators to apply time-series forecasting techniques to estimate the evolution of relevant metrics and take decisions based on the predicted state of the system. In this way, they can anticipate load peaks and trigger appropriate scaling actions in advance, such that new resources are available when needed. The proposed architecture is implemented in OpenStack, extending the monitoring capabilities of Monasca by injecting short-term forecasts of standard metrics. We use our architecture to implement predictive scaling policies leveraging linear regression, autoregressive integrated moving average, feed-forward, and recurrent neural networks (RNN). Then, we evaluate their performance on a synthetic workload, comparing it to that of a traditional policy. To assess the ability of the different models to generalize to unseen patterns, we also evaluate them on traces from a real content delivery network (CDN) workload. In particular, the RNN model exhibits the best overall performance in terms of prediction error, observed client-side response latency, and forecasting overhead. The implementation of our architecture is open-source.
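The core idea of the predictive policy described above is to apply an ordinary threshold rule to a short-term forecast of the metric rather than to its current value. The following is a minimal sketch of that idea using the simplest of the predictors mentioned (linear regression); the names (cpu_window, lead_steps, threshold) and the toy trace are illustrative assumptions, not the extension's actual interface.

```python
import numpy as np

def forecast_linear(samples, lead_steps):
    """Fit a degree-1 polynomial to recent samples and extrapolate ahead."""
    t = np.arange(len(samples))
    slope, intercept = np.polyfit(t, samples, deg=1)
    return slope * (len(samples) - 1 + lead_steps) + intercept

def should_scale_up(cpu_window, threshold=80.0, lead_steps=5):
    """Trigger scaling if the metric is *predicted* to breach the threshold,
    so new resources are ready by the time the load actually arrives."""
    return forecast_linear(cpu_window, lead_steps) > threshold

ramp = [40, 45, 52, 58, 66, 71]        # synthetic CPU% samples
print(should_scale_up(ramp))           # True: the trend breaches 80% soon
```

Choosing lead_steps on the order of the instance bootstrap time is what turns the rule from reactive into anticipatory.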
Subtotal petrosectomy and cochlear implantation
Objective. The objective of this study is to analyse surgical outcomes in a series of patients who underwent subtotal petrosectomy in combination with cochlear implantation. Methods. Retrospective chart review. Thirty patients (32 ears) underwent subtotal petrosectomy and cochlear implantation in one stage. Indications for subtotal petrosectomy included: cholesteatoma, chronic otitis media, previous canal wall-down surgery, osteoradionecrosis, revision surgery for clinical reasons, inner ear malformations, middle ear anatomical variations and severe cochlear ossification. Results. Follow-up ranged from 2 to 54 months. Only 2 complications related to the subtotal petrosectomy (1 subcutaneous abdominal haematoma and 1 subcutaneous abdominal seroma) occurred in this series. Complete electrode insertion was achieved in all but 4 cases. Conclusions. Subtotal petrosectomy is a safe procedure and can offer technical advantages in some cases of complex cochlear implantation.
Predictive auto-scaling with OpenStack Monasca
Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold. Tuning these rules becomes particularly cumbersome when scaling up a cluster involves non-negligible times to bootstrap new instances, as happens frequently in production cloud services. To deal with this problem, we propose an architecture for auto-scaling cloud services based on the status in which the system is expected to evolve in the near future. Our approach leverages time-series forecasting techniques, like those based on machine learning and artificial neural networks, to predict the future dynamics of key metrics, e.g., resource consumption metrics, and apply a threshold-based scaling policy on them. The result is a predictive automation policy that is able, for instance, to automatically anticipate peaks in the load of a cloud application and trigger ahead of time appropriate scaling actions to accommodate the expected increase in traffic. We prototyped our approach as an open-source OpenStack component, which relies on, and extends, the monitoring capabilities offered by Monasca, resulting in the addition of predictive metrics that can be leveraged by orchestration components like Heat or Senlin. We show experimental results using a recurrent neural network and a multi-layer perceptron as predictors, which are compared with a simple linear regression and a traditional non-predictive auto-scaling policy. Moreover, the proposed framework allows for the easy customization of the prediction policy as needed.
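The "predictive metrics" pattern described above can be pictured as publishing, next to each raw metric, a companion series holding the value the metric is expected to reach after the bootstrap delay, so that an unmodified threshold rule (e.g., in Heat or Senlin) can simply be pointed at the companion series. Below is a hedged sketch of that pattern; the ".predicted" naming convention and the trivial linear predictor are assumptions for illustration, not the actual Monasca extension.

```python
from collections import deque
import numpy as np

class PredictiveMetric:
    """Wraps a raw metric stream and emits (raw, predicted) pairs."""

    def __init__(self, name, window=12, lead_steps=5):
        self.name = name
        self.samples = deque(maxlen=window)   # sliding history window
        self.lead_steps = lead_steps          # horizon ~ bootstrap time

    def push(self, value):
        """Ingest a raw sample and return both measurement series."""
        self.samples.append(value)
        if len(self.samples) >= 2:
            t = np.arange(len(self.samples))
            slope, icpt = np.polyfit(t, list(self.samples), 1)
            predicted = slope * (t[-1] + self.lead_steps) + icpt
        else:
            predicted = value                 # not enough history yet
        return {self.name: value, f"{self.name}.predicted": predicted}

m = PredictiveMetric("cpu.utilization_perc")
for sample in [40, 48, 57, 63, 70]:
    print(m.push(sample))
```

Swapping the predictor (e.g., for an RNN) changes only the body of push(), which is what makes the prediction policy easy to customize.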
RRAML: Reinforced Retrieval Augmented Machine Learning
The emergence of large language models (LLMs) has revolutionized machine learning and related fields, showcasing remarkable abilities in comprehending, generating, and manipulating human language. However, their conventional usage through API-based text prompt submissions imposes certain limitations in terms of context constraints and external source availability. LLMs suffer from the problem of hallucinating text, and in the last year several approaches have been devised to overcome this issue: adding an external knowledge base or an external memory consisting of embeddings stored and retrieved by vector databases. In all the current approaches, though, the main issues are: (i) they need to access an embedding model and then adapt it to the task they have to solve; (ii) in case they have to optimize the embedding model, they need access to the parameters of the LLM, which in many cases are "black boxes". To address these challenges, we propose a novel framework called Reinforced Retrieval Augmented Machine Learning (RRAML). RRAML integrates the reasoning capabilities of LLMs with supporting information retrieved by a purpose-built retriever from a vast user-provided database. By leveraging recent advancements in reinforcement learning, our method effectively addresses several critical challenges. Firstly, it circumvents the need for accessing LLM gradients. Secondly, it alleviates the burden of retraining LLMs for specific tasks, which is often impractical or impossible due to restricted access to the model and the computational intensity involved. Additionally, we seamlessly link the retriever's task with the reasoner, mitigating hallucinations and reducing irrelevant and potentially damaging retrieved documents. We believe that the research agenda outlined in this paper has the potential to profoundly impact the field of AI, democratizing access to and utilization of LLMs for a wide range of entities.
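To illustrate why this setup needs no LLM gradients, here is a minimal sketch of the general pattern: a lightweight retriever scores documents, the LLM is queried as a black box, and the task reward drives a REINFORCE-style update of the retriever alone. The black_box_llm() and task_reward() functions are placeholders for an API call and a task-specific metric, and the whole example is an illustrative abstraction of the framework described above, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
doc_feats = rng.normal(size=(50, 8))   # toy embeddings of a user database
theta = np.zeros(8)                    # retriever's scoring weights

def black_box_llm(prompt):             # stand-in for an API-only LLM
    return "answer"

def task_reward(answer, doc_id):       # stand-in for task feedback
    return 1.0 if doc_id < 10 else 0.0 # pretend docs 0-9 are relevant

for step in range(200):
    scores = doc_feats @ theta
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    doc = rng.choice(len(doc_feats), p=probs)       # sample a document
    answer = black_box_llm(f"context: doc {doc}")   # no gradients needed
    reward = task_reward(answer, doc)
    # REINFORCE: push probability mass toward rewarded documents;
    # grad is the score-function gradient of log softmax at `doc`.
    grad = doc_feats[doc] - probs @ doc_feats
    theta += 0.1 * reward * grad

print(np.argsort(doc_feats @ theta)[-3:])  # highest-scoring docs after training
```

Because only theta is updated, the scheme works even when the LLM is reachable solely through a prompt-in, text-out API.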
Bot and gender detection of Twitter accounts using distortion and LSA: notebook for PAN at CLEF 2019
In this work, we present our approach for the Author Profiling task of PAN 2019. The task is divided into two sub-problems, bot detection and gender detection, for two different languages: English and Spanish. For each instance of the problem and each language, we address the problem differently. We use an ensemble architecture to solve bot detection for accounts that write in English and a single SVM for those that write in Spanish. For gender detection we use a single SVM architecture for both languages, but we pre-process the tweets differently. Our final models achieve accuracy over 90% in the bot detection task, while for gender detection they achieve 84.17% and 77.61% for English and Spanish, respectively.
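The LSA + SVM family of models mentioned above follows a standard text-classification recipe: TF-IDF features reduced with truncated SVD (latent semantic analysis) feeding a linear SVM. The following is a minimal scikit-learn sketch of that recipe; the text-distortion pre-processing step and the exact hyperparameters of the PAN submission are not reproduced, and the tiny toy data is illustrative only.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import LinearSVC

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word uni/bigrams
    ("lsa", TruncatedSVD(n_components=2)),           # LSA dimensionality cut
    ("svm", LinearSVC()),                            # linear decision boundary
])

tweets = ["check out this link now", "had a lovely walk today",
          "buy followers cheap fast", "coffee with friends this morning"]
labels = ["bot", "human", "bot", "human"]            # toy account labels
pipeline.fit(tweets, labels)
print(pipeline.predict(["free followers click here"]))
```

In practice, per-account predictions would aggregate over all of an author's tweets rather than classify single messages.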
