The Morphospace of Consciousness
We construct a complexity-based morphospace to study systems-level properties
of conscious & intelligent systems. The axes of this space label 3 complexity
types: autonomous, cognitive & social. Given recent proposals to synthesize
consciousness, a generic complexity-based conceptualization provides a useful
framework for identifying defining features of conscious & synthetic systems.
Based on current clinical scales of consciousness that measure cognitive
awareness and wakefulness, we take a perspective on how contemporary
artificially intelligent machines & synthetically engineered life forms measure
on these scales. It turns out that awareness & wakefulness can be associated
with computational & autonomous complexity, respectively. Subsequently, building
on insights from cognitive robotics, we examine the function that consciousness
serves & argue that it acts as an evolutionary game-theoretic strategy. This
makes the case for a third complexity type for describing consciousness: social
complexity. Identifying these complexity types allows both biological &
synthetic systems to be represented in a common morphospace. A consequence of
this classification is a taxonomy of possible
conscious machines. We identify four types of consciousness, based on
embodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii)
group consciousness (resulting from group interactions), & (iv) simulated
consciousness (embodied by virtual agents within a simulated reality). This
taxonomy helps in the investigation of comparative signatures of consciousness
across domains, in order to highlight design principles necessary to engineer
conscious machines. This is particularly relevant in the light of recent
developments at the crossroads of cognitive neuroscience, biomedical
engineering, artificial intelligence & biomimetics.
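As a rough illustration only (the coordinates below are hypothetical placeholders, not values from the paper), systems can be treated as points along the three complexity axes of the morphospace:

    from dataclasses import dataclass

    @dataclass
    class SystemPoint:
        name: str
        autonomous: float      # autonomous complexity (linked above to wakefulness)
        computational: float   # computational/cognitive complexity (linked to awareness)
        social: float          # social complexity

    # Hypothetical placements in the morphospace.
    systems = [
        SystemPoint("adult human", 0.9, 0.9, 0.9),
        SystemPoint("deep-RL game agent", 0.2, 0.7, 0.1),
        SystemPoint("synthetic protocell", 0.5, 0.1, 0.0),
    ]

    for s in systems:
        print(f"{s.name}: ({s.autonomous}, {s.computational}, {s.social})")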
The relation between cardiac 123I-mIBG scintigraphy and functional response 1 year after CRT implantation
Cardiac resynchronization therapy (CRT) is a disease-modifying therapy in patients with chronic heart failure (CHF). Current guidelines base CRT eligibility on only three parameters: left ventricular ejection fraction (LVEF), QRS duration, and New York Heart Association (NYHA) functional class. However, one-third of CHF patients do not benefit from CRT. This study evaluated whether cardiac sympathetic activity, assessed with 123I-meta-iodobenzylguanidine (123I-mIBG) scintigraphy, could optimize CRT patient selection.
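As a minimal, purely illustrative sketch (not this study's protocol; background-subtraction and decay-correction conventions vary between centres), the standard planar 123I-mIBG indices of cardiac sympathetic activity are the heart-to-mediastinum (H/M) ratio and the washout rate:

    def hm_ratio(heart_counts_per_pixel: float, mediastinum_counts_per_pixel: float) -> float:
        """Heart-to-mediastinum (H/M) ratio from mean ROI counts."""
        return heart_counts_per_pixel / mediastinum_counts_per_pixel

    def washout_rate(early_heart: float, early_med: float,
                     late_heart: float, late_med: float) -> float:
        """Washout rate (%) between early and late planar images."""
        early = early_heart - early_med
        late = late_heart - late_med
        return 100.0 * (early - late) / early

    # Example with made-up numbers:
    print(hm_ratio(55.0, 32.0))                    # ~1.7, a reduced late H/M ratio
    print(washout_rate(60.0, 30.0, 45.0, 28.0))    # ~43% washout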
Embodied Artificial Intelligence through Distributed Adaptive Control: An Integrated Framework
In this paper, we argue that the future of Artificial Intelligence research
resides in two keywords: integration and embodiment. We support this claim by
analyzing the recent advances of the field. Regarding integration, we note that
the most impactful recent contributions have been made possible through the
integration of recent Machine Learning methods (based in particular on Deep
Learning and Recurrent Neural Networks) with more traditional ones (e.g.
Monte-Carlo tree search, goal babbling exploration or addressable memory
systems). Regarding embodiment, we note that the traditional benchmark tasks
(e.g. visual classification or board games) are becoming obsolete as
state-of-the-art learning algorithms approach or even surpass human performance
in most of them, which has recently encouraged the development of first-person 3D
game platforms embedding realistic physics. Building upon this analysis, we
first propose an embodied cognitive architecture integrating heterogeneous
sub-fields of Artificial Intelligence into a unified framework. We demonstrate
the utility of our approach by showing how major contributions of the field can
be expressed within the proposed framework. We then claim that benchmarking
environments need to reproduce ecologically-valid conditions for bootstrapping
the acquisition of increasingly complex cognitive skills through the concept of
a cognitive arms race between embodied agents.
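As a toy, hedged illustration of the integration theme (all names and numbers below are invented, not taken from the paper), one can blend a stand-in "learned" value estimate with Monte-Carlo rollouts when choosing actions in a simple one-dimensional world:

    import random

    GOAL = 5

    def learned_value(state):
        # Stand-in for a learned value network: states closer to the goal look better.
        return -abs(GOAL - state)

    def rollout_return(state, first_action, horizon=10):
        # Monte-Carlo rollout: take first_action, then act randomly for a few steps.
        pos, total = state + first_action, 0.0
        for _ in range(horizon):
            if pos == GOAL:
                return total + 10.0   # bonus for reaching the goal
            pos += random.choice((-1, 1))
            total -= 1.0              # per-step cost
        return total

    def choose_action(state, n_rollouts=50):
        scores = {}
        for a in (-1, 1):
            mc = sum(rollout_return(state, a) for _ in range(n_rollouts)) / n_rollouts
            scores[a] = mc + 0.5 * learned_value(state + a)   # blend rollout estimate and prior
        return max(scores, key=scores.get)

    print(choose_action(0))   # usually 1, i.e. a step toward the goal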
Growing-up hand in hand with robots: Designing and evaluating child-robot interaction from a developmental perspective
Robots are becoming part of children's care, entertainment, education, social assistance and therapy. A steadily growing body of Human-Robot Interaction (HRI) research shows that child-robot interaction (CRI) holds promise for supporting children's development in novel ways. However, research has shown that technologies that do not take into account children's needs, abilities, interests, and developmental characteristics may have a limited or even negative impact on their physical, cognitive, social, emotional, and moral development. As a result, robotic technology that aims to support children by means of social interaction has to take the developmental perspective into consideration. With this workshop (the third in a series of workshops focusing on CRI research), we aim to bring together researchers to discuss how a developmental perspective plays a role in smart and natural interaction between robots and children. We invite participants to share their experiences with the challenges of taking a developmental perspective in CRI, such as sustaining long-term interactions in the wild and involving children and other stakeholders in the design process. Looking across disciplinary boundaries, we hope to stimulate thought-provoking discussions on epistemology, methods, approaches, techniques, interaction scenarios and design principles focused on supporting children's development through interaction with robotic technology. Our goal is not only to conceive and formulate these outcomes within the workshop venue, but also to establish them and make them available to the HRI community in different forms.
Reinforcement learning or active inference?
This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimise their free energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming, namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
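As a hedged sketch of the quantity involved (standard variational notation, not necessarily the paper's), the free energy minimised by such agents can be written as

    F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
      = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o) \;\ge\; -\ln p(o),

where o denotes sensory observations, s the hidden environmental states, and q(s) the agent's approximate (recognition) density. Perception reduces F by adjusting the internal states that parameterise q, while action reduces F by changing which observations o are sampled, so a single quantity drives both behaviour and inference.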
Feed-Forward Chains of Recurrent Attractor Neural Networks Near Saturation
We perform a stationary state replica analysis for a layered network of Ising
spin neurons, with recurrent Hebbian interactions within each layer, in
combination with strictly feed-forward Hebbian interactions between successive
layers. This model interpolates between the fully recurrent and symmetric
attractor network studied by Amit et al., and the strictly feed-forward
attractor network studied by Domany et al. Due to the absence of detailed
balance, it is as yet solvable only in the zero temperature limit. The built-in
competition between two qualitatively different modes of operation,
feed-forward (ergodic within layers) versus recurrent (non-ergodic within
layers), is found to induce interesting phase transitions.
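As an illustrative sketch (the standard Hebbian prescription for such layered models; the paper's exact normalisation and notation may differ), the couplings combine a recurrent term within layer l and a feed-forward term from layer l to layer l+1:

    J_{ij}^{(l)} = \frac{J_r}{N} \sum_{\mu=1}^{p} \xi_i^{(l,\mu)} \xi_j^{(l,\mu)},
    \qquad
    J_{ij}^{(l+1,l)} = \frac{J_f}{N} \sum_{\mu=1}^{p} \xi_i^{(l+1,\mu)} \xi_j^{(l,\mu)},

where the \xi^{(l,\mu)} = \pm 1 are the p patterns stored in each layer of N neurons, and J_r, J_f set the relative strengths of the recurrent and feed-forward modes; "near saturation" refers to p = \alpha N with \alpha finite.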
Towards a synthetic tutor assistant: The EASEL project and its architecture
Robots are gradually but steadily being introduced into our daily lives. A paramount application is education, where robots can assume the role of a tutor, a peer or simply a tool that helps learners in a specific knowledge domain. Such an endeavor poses specific challenges: affective social behavior, proper modelling of the learner's progress, and discrimination of the learner's utterances, expressions and mental states, which, in turn, require an integrated architecture combining perception, cognition and action. In this paper we present an attempt to improve the current state of robots in the educational domain by introducing the EASEL EU project. Specifically, we introduce EASEL's unified robot architecture, an innovative Synthetic Tutor Assistant (STA) whose goal is to interactively guide learners in a science-based learning paradigm, allowing rich multimodal interactions.
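As a schematic, hypothetical sketch of the kind of perception-cognition-action integration described above (component names and logic are invented for illustration, not EASEL's actual architecture):

    class Perception:
        def sense(self, raw_input: str) -> dict:
            return {"utterance": raw_input, "confused": "?" in raw_input}

    class Cognition:
        def __init__(self):
            self.learner_model = {"progress": 0.0}
        def decide(self, percept: dict) -> str:
            if percept["confused"]:
                return "offer_hint"
            self.learner_model["progress"] += 0.1
            return "encourage"

    class Action:
        def act(self, decision: str) -> str:
            return {"offer_hint": "Let me give you a hint...",
                    "encourage": "Great, keep going!"}[decision]

    perception, cognition, action = Perception(), Cognition(), Action()
    for utterance in ["I think it floats", "why does it sink?"]:
        print(action.act(cognition.decide(perception.sense(utterance))))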
Interpreting Psychophysiological States Using Unobtrusive Wearable Sensors in Virtual Reality
One of the main challenges in the study of human behavior is to quantitatively assess the participants' affective states by measuring their psychophysiological signals in ecologically valid conditions. The quality of the acquired data is, in fact, often poor due to artifacts generated by natural interactions such as full body movements and gestures. We created a technology to address this problem. We enhanced the eXperience Induction Machine (XIM), an immersive space we built to conduct experiments on human behavior, with unobtrusive wearable sensors that measure electrocardiogram, breathing rate and electrodermal response. We conducted an empirical validation where participants wearing these sensors were free to move in the XIM space while exposed to a series of visual stimuli taken from the International Affective Picture System (IAPS). Our main result is the quantitative estimation of the arousal range of the affective stimuli through the analysis of participants' psychophysiological states. Taken together, our findings show that the XIM constitutes a novel tool to study human behavior in life-like conditions.
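As a minimal illustration of the kind of analysis involved (this is not the authors' pipeline; the feature choices and thresholds are assumptions), simple arousal-related features can be extracted from an electrodermal activity trace for one stimulus window:

    import numpy as np

    def eda_features(eda, fs, threshold=0.02):
        """Mean skin-conductance level and a crude count of phasic responses.

        eda: skin conductance in microsiemens; fs: sampling rate in Hz.
        """
        level = float(np.mean(eda))
        diff = np.diff(eda)
        # Count upward crossings of the slope threshold as candidate skin-conductance responses.
        scr_onsets = int(np.sum((diff[:-1] <= threshold) & (diff[1:] > threshold)))
        return {"mean_scl_uS": level, "scr_count": scr_onsets, "duration_s": len(eda) / fs}

    # Synthetic 10-second trace at 32 Hz with one response-like bump.
    t = np.arange(0, 10, 1 / 32)
    trace = 2.0 + 1.0 * np.exp(-((t - 4.0) ** 2))
    print(eda_features(trace, fs=32.0))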
