Applying multi-criteria optimisation to develop cognitive models
A scientific theory is developed by modelling empirical data in a range of domains. The goal of developing a theory is to optimise the fit of the theory to as many experimental settings as possible, whilst retaining some qualitative properties such as 'parsimony' or 'comprehensibility'. We formalise the task of developing theories of human cognition as a problem in multi-criteria optimisation. There are many challenges in this task, including the representation of competing theories, coordinating the fit with multiple experiments, and bringing together competing results to provide suitable theories. Experiments demonstrate the development of a theory of categorisation, using multiple optimisation criteria in genetic algorithms to locate Pareto-optimal sets
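The core selection step can be illustrated with a minimal sketch of Pareto dominance over candidate theories scored on several criteria (fit to individual experiments plus a parsimony score). The candidate names and scores below are purely illustrative and are not drawn from the paper.

```python
# Minimal sketch: selecting the Pareto-optimal set of candidate theories
# scored against several criteria (fit to each experiment, parsimony).
# All names and scores are illustrative, not taken from the paper.

def dominates(a, b):
    """True if candidate `a` is at least as good as `b` on every criterion
    and strictly better on at least one (higher score = better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the subset of candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

if __name__ == "__main__":
    # Each tuple: (fit to experiment 1, fit to experiment 2, parsimony score)
    theories = {
        "theory_A": (0.90, 0.60, 0.3),
        "theory_B": (0.70, 0.80, 0.5),
        "theory_C": (0.65, 0.75, 0.4),  # dominated by theory_B
    }
    front = pareto_front(list(theories.values()))
    print([name for name, score in theories.items() if score in front])
```

In a genetic algorithm, a dominance check of this kind typically drives selection, so that the population converges on the Pareto-optimal set rather than a single best-fit model.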
CHREST tutorial: Simulations of human learning
CHREST (Chunk Hierarchy and REtrieval STructures) is a comprehensive, computational model of human learning and perception. It has been used to successfully simulate data in a variety of domains, including: the acquisition of syntactic categories, expert behaviour, concept formation, implicit learning, and the acquisition of multiple representations in physics for problem solving. The aim of this tutorial is to provide participants with an introduction to CHREST, how it can be used to model various phenomena, and the knowledge to carry out their own modelling experiments
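As a rough illustration of the chunking idea underlying EPAM/CHREST-style models (not CHREST's actual data structures, mechanisms, or parameters), the sketch below grows a small discrimination network: unfamiliar material adds new branches, while familiar material is sorted further down the tree.

```python
# A heavily simplified sketch of chunk learning in a discrimination network.
# This illustrates the general idea only; it is not CHREST's implementation.

class Node:
    def __init__(self, image=()):
        self.image = tuple(image)   # the chunk stored at this node
        self.children = {}          # test (next element) -> child node

    def sort(self, pattern):
        """Follow tests matching the pattern as far as possible."""
        if pattern and pattern[0] in self.children:
            return self.children[pattern[0]].sort(pattern[1:])
        return self, pattern

    def learn(self, pattern):
        node, rest = self.sort(tuple(pattern))
        if rest:                    # discrimination: add a new test and branch
            node.children[rest[0]] = Node(node.image + (rest[0],))
        # (familiarisation, timing parameters, retrieval structures omitted)

root = Node()
for p in [("pawn", "e4"), ("pawn", "e4", "knight", "f3")]:
    root.learn(p)
print(sorted(root.children))        # ['pawn']
```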
EPAM/CHREST tutorial: Fifty years of simulating human learning
Generating quantitative predictions for complex cognitive phenomena requires precise implementations of the underlying cognitive theory. This tutorial focuses on the EPAM/CHREST tradition, which has been providing significant models of human behaviour for 50 years
Towards a model of expectation-driven perception
Human perception is an active process by which meaningful information is gathered from the external environment. Application areas such as human-computer interaction (HCI), or the role of human experts in image analysis, highlight the need to understand how humans, especially experts, use prior information when interpreting what they see. Here, we describe how CHREST, a model of expert perception, is currently being extended to support expectation-driven perception of bitmap-level image data, focusing particularly on its ability to learn semantic interpretations
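One way to picture an expectation-driven cycle of this kind is sketched below: chunks recognised so far generate expectations about where to sample the image next. The loop structure, function names, and toy image are assumptions made for illustration only, not the extension described here.

```python
# Illustrative expectation-driven perception loop: bottom-up recognition at the
# current fixation alternates with top-down expectations that guide the next
# fixation. Names and structure are assumptions, not CHREST's implementation.

def perceive(image, recognise, expectations_of, fixations=5):
    """Alternate between recognising a chunk at the current fixation and
    letting the chunks recognised so far propose where to look next."""
    interpretation = []
    fixation = (0, 0)                               # start at an arbitrary location
    for _ in range(fixations):
        chunk = recognise(image, fixation)          # bottom-up: what is seen here?
        if chunk is not None:
            interpretation.append(chunk)
        expected = expectations_of(interpretation)  # top-down: what should come next?
        if not expected:
            break
        fixation = expected[0]                      # attend to the strongest expectation
    return interpretation

if __name__ == "__main__":
    # Toy stand-ins for the recognition and expectation components.
    grid = {(0, 0): "edge", (0, 1): "corner", (1, 1): "label"}
    recognise = lambda img, loc: img.get(loc)
    expectations_of = lambda interp: [(0, 1)] if interp == ["edge"] else \
                                     [(1, 1)] if interp == ["edge", "corner"] else []
    print(perceive(grid, recognise, expectations_of))   # ['edge', 'corner', 'label']
```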
Computational models of the development of perceptual expertise
In a recent article, Palmeri, Wong and Gauthier have argued that computational models may help direct hypotheses about the development of perceptual expertise. They support their claim with an analysis of models from the object-recognition and perceptual-categorization literatures. Surprisingly, however, they do not consider any computational models from traditional research into expertise, essentially the research deriving from Chase and Simon’s chunking theory, which itself was influenced by De Groot’s study of chess players. This is unfortunate, as a series of computational models based on perceptual chunking have explained a substantial number of phenomena related to expert behaviour and provide mechanisms that directly address the question of perceptual expertise
An investigation into the effect of ageing on expert memory with CHREST
CHREST is a cognitive architecture that models human perception, learning, memory, and problem solving, and has successfully simulated a wide range of human experimental data on chess. In this paper, we describe an investigation into the effects of ageing on expert memory using CHREST. The results of the simulations are related to the literature on ageing. The study illustrates how Computational Intelligence can be used to understand complex phenomena that are affected by multiple variables dynamically evolving as a function of time and that have direct practical implications for human societies
Simple environments fail as illustrations of intelligence: A review of R. Pfeifer and C. Scheier
The field of cognitive science has always supported a variety of modes of research, often polarised into those seeking high-level explanations of intelligence and those seeking low-level, perhaps even neuro-physiological, explanations. Each of these research directions permits, at least in part, a similar methodology based around the construction of detailed computational models, which justify their explanatory claims by matching behavioural data. We are fortunate at this time to witness the culmination of several decades of work from each of these research directions, and hopefully to find within them the basic ideas behind a complete theory of human intelligence. It is in this spirit that Rolf Pfeifer and Christian Scheier have written their book Understanding Intelligence. However, their aim is manifestly not to present an overview of all prior work in this field, but instead to argue forcefully for one particular interpretation – a synthetic approach, based around the explicit construction of autonomous agents. This approach is characterised by the Embodiment Hypothesis, which is presented as a complete framework for investigating intelligence, and exemplified by a number of computational models and robots to illustrate just how the field of cognitive science might develop in the future. We first provide an overview of their book, before describing some of our reservations about its contribution towards an understanding of intelligence
A distributed framework for semi-automatically developing architectures of brain and mind
Developing comprehensive theories of low-level neuronal brain processes and high-level cognitive behaviours, as well as integrating them, is an ambitious challenge that requires new conceptual, computational, and empirical tools. Given the complexities of these theories, they will almost certainly be expressed as computational systems. Here, we propose to use recent developments in grid technology to develop a system of evolutionary scientific discovery, which will (a) enable empirical researchers to make their data widely available for use in developing and testing theories, and (b) enable theorists to semi-automatically develop computational theories. We illustrate these ideas with a case study taken from the domain of categorisation
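The evolutionary discovery loop at the heart of such a system might look roughly like the sketch below, in which candidate theories are repeatedly scored against pooled empirical datasets and the better candidates are varied to form the next generation. The fitness function, mutation operator, and toy datasets are placeholders, not part of the proposed system's actual interface.

```python
# A hedged sketch of an evolutionary discovery loop over candidate theories,
# evaluated against datasets contributed by empirical researchers.
import random

def evolve_theories(initial, datasets, fit, mutate, generations=20, keep=5):
    """Evolve candidate theories by keeping those that best fit the pooled
    datasets and mutating them to form new candidates."""
    population = list(initial)
    for _ in range(generations):
        # Average fit across every contributed dataset (could be multi-criteria).
        scored = sorted(population,
                        key=lambda t: sum(fit(t, d) for d in datasets) / len(datasets),
                        reverse=True)
        survivors = scored[:keep]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(len(population) - keep)]
    return population[:keep]

if __name__ == "__main__":
    # Toy demo: a "theory" is a single parameter, each dataset a target value.
    datasets = [0.6, 0.7, 0.65]
    fit = lambda theory, target: -abs(theory - target)
    mutate = lambda theory: theory + random.gauss(0, 0.05)
    best = evolve_theories([random.random() for _ in range(10)], datasets, fit, mutate)
    print(best[0])
```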
Checking chess checks with chunks: A model of simple check detection
The procedure by which humans identify checks in chess positions is not well understood. We report here our experience in modelling this process with CHREST, a general-purpose cognitive model that has previously successfully captured a variety of attention- and perception-related phenomena. We have attempted to reproduce the results of an experiment investigating the ability of humans to determine checks in simple chess positions. We propose a specific model of how humans perform this experiment, and show that, given certain reasonable assumptions, CHREST can follow this model to create a good reproduction of the data
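A heavily simplified, purely illustrative take on chunk-based check detection is sketched below: each fixated piece retrieves a learned attack pattern, and a check is reported when the opposing king's square falls within it. The chunk store, piece coverage, and scan order are assumptions; this is not the specific model proposed in the paper.

```python
# Illustrative sketch: check detection via retrieved attack patterns.
# Only non-sliding pieces are covered, purely to keep the example small.
ATTACK_CHUNKS = {
    "knight": [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)],
    "king":   [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)],
}

def detect_check(pieces, king_square):
    """Scan attacking pieces one at a time and report a check as soon as a
    retrieved attack pattern covers the king's square."""
    kx, ky = king_square
    for (x, y), piece in pieces.items():
        offsets = ATTACK_CHUNKS.get(piece, [])
        if any((x + dx, y + dy) == (kx, ky) for dx, dy in offsets):
            return True
    return False

# Example: a knight on (3, 3) gives check to a king on (4, 5).
print(detect_check({(3, 3): "knight"}, (4, 5)))   # True
```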
Learning perceptual schemas to avoid the utility problem
This paper describes principles for representing and organising planning knowledge in a machine learning architecture. One of the difficulties with learning about tasks requiring planning is the utility problem: as more knowledge is acquired by the learner, utilising that knowledge becomes so complex that it overwhelms the mechanisms of the original task. This problem does not, however, occur with human learners: on the contrary, it is usually the case that the more knowledgeable the learner, the greater the efficiency and accuracy in locating a solution. The reason for this lies in the types of knowledge acquired by the human learner and its organisation. We describe the basic representations which underlie the superior abilities of human experts, and present algorithms for using equivalent representations in a machine learning architecture
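The contrast at issue can be illustrated with a small sketch (the rule format and cue names are hypothetical): flat rule matching costs grow with every rule learned, whereas perceptually indexed retrieval selects a small candidate set directly from the current cue.

```python
# Illustrative contrast between flat rule matching, whose cost grows with the
# number of rules learned (the utility problem), and perceptually indexed
# retrieval, where the current cue selects candidates directly. The rule
# format and cues are assumptions, not the paper's representation.

def retrieve_flat(rules, situation):
    """Linear scan: every learned rule is tested against the situation."""
    return [action for condition, action in rules if condition(situation)]

def retrieve_indexed(index, cue):
    """Indexed lookup: the perceptual cue selects candidates directly, so
    retrieval cost does not grow with the total number of rules learned."""
    return index.get(cue, [])

# Toy example: rules keyed by a salient perceptual feature of the situation.
rules = [
    (lambda s: s.get("open_file"), "place rook on open file"),
    (lambda s: s.get("hanging_piece"), "capture the hanging piece"),
]
index = {
    "open_file": ["place rook on open file"],
    "hanging_piece": ["capture the hanging piece"],
}
print(retrieve_flat(rules, {"hanging_piece": True}))
print(retrieve_indexed(index, "hanging_piece"))
```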
