Listening to limericks: a pupillometry investigation of perceivers’ expectancy
What features of a poem make it captivating, and which cognitive mechanisms are sensitive to these features? We addressed these questions experimentally by measuring pupillary responses of 40 participants who listened to a series of limericks. The limericks ended with a semantic, syntactic, rhyme, or metric violation. Compared to a control condition without violations, only the rhyme violation condition induced a reliable pupillary response. An anomaly-rating study on the same stimuli showed that all violations were reliably detectable relative to the control condition, but the anomaly induced by rhyme violations was perceived as most severe. Together, our data suggest that rhyme violations in limericks may induce an emotional response beyond mere anomaly detection.
A behavioral database for masked form priming
Reading involves a process of matching an orthographic input with stored representations in lexical memory. The masked priming paradigm has become a standard tool for investigating this process. Use of existing results from this paradigm can be limited by the precision of the data and the need for cross-experiment comparisons that lack normal experimental controls. Here, we present a single, large, high-precision, multicondition experiment to address these problems. Over 1,000 participants from 14 sites responded to 840 trials involving 28 different types of orthographically related primes (e.g., castfe–CASTLE) in a lexical decision task, as well as completing measures of spelling and vocabulary. The data were indeed highly sensitive to differences between conditions: After correction for multiple comparisons, prime type condition differences of 2.90 ms and above reached significance at the 5% level. This article presents the method of data collection and preliminary findings from these data, which included replications of the most widely agreed-upon differences between prime types, further evidence for systematic individual differences in susceptibility to priming, and new evidence regarding lexical properties associated with a target word’s susceptibility to priming. These analyses will form a basis for the use of these data in quantitative model fitting and evaluation and for future exploration of these data that will inform and motivate new experiments.
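To make the paradigm concrete, the sketch below lays out the event sequence of a single masked-priming lexical decision trial (forward mask, briefly presented lowercase prime, uppercase target until response). The durations and the function name are illustrative assumptions, not the presentation parameters reported for this study.

```python
# A minimal sketch of the event sequence in a masked-priming lexical decision
# trial. The durations (500 ms forward mask, 50 ms prime) are typical values
# assumed for illustration; they are not the parameters reported for this study.

def masked_priming_trial(prime, target, mask_ms=500, prime_ms=50):
    """Return the stimulus sequence for one trial as (display, duration) pairs.

    The prime is shown briefly in lowercase and is immediately replaced by the
    uppercase target, which stays on screen until the word/nonword response.
    """
    mask = "#" * len(target)
    return [
        (mask, mask_ms),            # forward mask
        (prime.lower(), prime_ms),  # briefly presented related prime, e.g. "castfe"
        (target.upper(), None),     # target until response, e.g. "CASTLE"
    ]

for display, duration in masked_priming_trial("castfe", "castle"):
    shown_for = f"{duration} ms" if duration is not None else "until response"
    print(f"{display:>8}  {shown_for}")
```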
Participant Nonnaiveté and the reproducibility of cognitive psychology
Many argue that there is a reproducibility crisis in psychology. We investigated nine well-known effects from the cognitive psychology literature—three each from the domains of perception/action, memory, and language—and found that they are highly reproducible. Not only can they be reproduced in online environments, but they also can be reproduced with nonnaïve participants with no reduction of effect size. Apparently, some cognitive tasks are so constraining that they encapsulate behavior from external influences, such as the testing situation and recent prior experience with the experiment, to yield highly robust effects.
Transposed-letter priming effects in reading aloud words and nonwords
A masked nonword prime generated by transposing adjacent inner letters in a word (e.g., jugde) facilitates the recognition of the target word (JUDGE) more than a prime in which the relevant letters are replaced by different letters (e.g., junpe). This transposed-letter (TL) priming effect has been widely interpreted as evidence that the coding of letter position is flexible, rather than precise. Although the TL priming effect has been extensively investigated in the domain of visual word recognition using the lexical decision task, very few studies have investigated this empirical phenomenon in reading aloud. In the present study, we investigated TL priming effects in reading aloud words and nonwords and found that these effects are of equal magnitude for the two types of items. We take this result as support for the view that the TL priming effect arises from noisy perception of letter order within the prime prior to the mapping of orthography to phonology.
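The prime manipulation described above is easy to state procedurally: swap two adjacent inner letters of the target to obtain a transposed-letter prime, or replace those same letters to obtain the substitution control. The Python sketch below is a minimal illustration of that construction; the function names and the letter pool used for replacements are assumptions for illustration, not the authors' stimulus-generation code.

```python
import random

def make_tl_prime(word, pos):
    """Transposed-letter prime: swap the adjacent inner letters at pos and pos+1.

    make_tl_prime("judge", 2) -> "jugde"
    """
    letters = list(word)
    letters[pos], letters[pos + 1] = letters[pos + 1], letters[pos]
    return "".join(letters)

def make_rl_prime(word, pos, pool="bcdfghjklmnpqrstvwz"):
    """Replacement-letter control: substitute the same two inner letters with
    different letters, e.g. "judge" -> a "junpe"-style control prime."""
    letters = list(word)
    for i in (pos, pos + 1):
        letters[i] = random.choice([c for c in pool if c != word[i]])
    return "".join(letters)

print(make_tl_prime("judge", 2))   # jugde
print(make_rl_prime("judge", 2))   # e.g. junpe
```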
AUX: A scripting language for auditory signal processing and software packages for psychoacoustic experiments and education
This article introduces AUX (AUditory syntaX), a scripting syntax specifically designed to describe auditory signals and processing, to the members of the behavioral research community. The syntax is based on descriptive function names and intuitive operators suitable for researchers and students without substantial training in programming, who wish to generate and examine sound signals using a written script. In this article, the essence of AUX is discussed and practical examples of AUX scripts specifying various signals are illustrated. Additionally, two accompanying Windows-based programs and development libraries are described. AUX Viewer is a program that generates, visualizes, and plays sounds specified in AUX. AUX Viewer can also be used for class demonstrations or presentations. Another program, Psycon, allows a wide range of sound signals to be used as stimuli in common psychophysical testing paradigms, such as the adaptive procedure, the method of constant stimuli, and the method of adjustment. AUX Library is also provided, so that researchers can develop their own programs utilizing AUX. The philosophical basis of AUX is to separate signal generation from the user interface needed for experiments. AUX scripts are portable and reusable; they can be shared by other researchers, regardless of differences in actual AUX-based programs, and reused for future experiments. In short, the use of AUX can be potentially beneficial to all members of the research community—both those with programming backgrounds and those without.
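The sketch below is deliberately not AUX syntax (the article itself gives the actual AUX examples); it is a plain Python/NumPy analogue of the workflow the abstract describes, in which a short written script specifies a signal (here, a gated pure tone) that can then be inspected or used as a stimulus. All names and parameter values are illustrative assumptions.

```python
# Not AUX syntax: a plain NumPy analogue of script-driven stimulus generation,
# i.e. specifying a sound in a short written script and then inspecting or
# playing it, which is the workflow described for the AUX tools above.
import numpy as np

FS = 44100  # sampling rate in Hz (an assumed value)

def tone(freq_hz, dur_s, amp=0.5):
    """Generate a pure tone as an array of samples."""
    t = np.arange(int(dur_s * FS)) / FS
    return amp * np.sin(2 * np.pi * freq_hz * t)

def ramp(signal, ramp_ms=10):
    """Apply raised-cosine onset/offset ramps to avoid audible clicks."""
    n = int(FS * ramp_ms / 1000)
    window = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n)))
    out = signal.copy()
    out[:n] *= window
    out[-n:] *= window[::-1]
    return out

# A 1 kHz, 300 ms tone with 10 ms ramps, e.g. as a stimulus in an adaptive procedure.
stimulus = ramp(tone(1000, 0.3))
print(len(stimulus), float(stimulus.max()))
```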
Active Vision during Action Execution, Observation and Imagery: Evidence for Shared Motor Representations
The concept of shared motor representations between action execution and various covert conditions has been demonstrated through a number of psychophysiological modalities over the past two decades. Rarely, however, have researchers considered the congruence of physical, imaginary and observed movement markers in a single paradigm, and never in a design where eye movement metrics are the markers. In this study, participants were required to perform a forward reach and point Fitts’ Task on a digitizing tablet whilst wearing an eye movement system. Gaze metrics were used to compare behaviour congruence between action execution, action observation, and guided and unguided movement imagery conditions. The data showed that participants attended the same task-related visual cues between conditions but the strategy was different. Specifically, the number of fixations was significantly different between action execution and all covert conditions. In addition, fixation duration was congruent between action execution and action observation only, and both conditions displayed an indirect Fitts’ Law effect. We therefore extend the understanding of the common motor representation by demonstrating, for the first time, common spatial eye movement metrics across simulation conditions and some specific temporal congruence for action execution and action observation. Our findings suggest that action observation may be an effective technique in supporting motor processes. The use of video as an adjunct to physical techniques may be beneficial in supporting motor planning in both performance and clinical rehabilitation environments.
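As background for the Fitts’ Task and the Fitts’ Law effect mentioned above, the relationship is conventionally written as a linear function of an index of difficulty determined by target distance and width. The Shannon formulation below is the common modern form; the abstract does not state which formulation the authors used.

```latex
% Fitts' Law: movement time (MT) increases linearly with the index of difficulty (ID),
% where D is the movement amplitude (distance to the target), W is the target width,
% and a, b are empirically fitted constants. The Shannon formulation is shown here;
% the original formulation uses ID = \log_2(2D/W).
MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{D}{W} + 1\right)
```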
Towards the automated localisation of targets in rapid image-sifting by collaborative brain-computer interfaces
The N2pc is a lateralised Event-Related Potential (ERP) that signals a shift of attention towards the location of a potential object of interest. We propose a single-trial target-localisation collaborative Brain-Computer Interface (cBCI) that exploits this ERP to automatically approximate the horizontal position of targets in aerial images. Images were presented by means of the rapid serial visual presentation technique at rates of 5, 6 and 10 Hz. We created three different cBCIs and tested a participant selection method in which groups are formed according to the similarity of participants’ performance. The N2pc that is elicited in our experiments contains information about the position of the target along the horizontal axis. Moreover, combining information from multiple participants provides absolute median improvements in the area under the receiver operating characteristic curve of up to 21% (for groups of size 3) with respect to single-user BCIs. These improvements are larger when groups are formed by participants with similar individual performance, and much of this effect can be explained using simple theoretical models. Our results suggest that BCIs for automated triaging can be improved by integrating two classification systems: one devoted to target detection and another to detect the attentional shifts associated with lateral targets.
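As a rough illustration of the group-combination idea (pooling single-trial evidence across participants and scoring it with the area under the ROC curve), the sketch below fuses per-participant classifier scores for a group of three by simple averaging. The mean fusion rule, the synthetic scores, and the use of scikit-learn's roc_auc_score are assumptions for illustration, not the classifiers or fusion method used in the study.

```python
# A rough illustration of collaborative-BCI score fusion: per-trial classifier
# scores from three participants are averaged and evaluated with the area under
# the ROC curve. The mean fusion rule and the synthetic scores are assumptions
# for illustration only, not the method used in the study described above.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_trials, group_size = 200, 3

labels = rng.integers(0, 2, n_trials)  # true target side per trial (0 = left, 1 = right)

# Simulated single-user scores: weak, noisy evidence about the target side.
scores = labels[:, None] + rng.normal(0.0, 2.0, (n_trials, group_size))

single_user_auc = np.mean([roc_auc_score(labels, scores[:, p]) for p in range(group_size)])
group_auc = roc_auc_score(labels, scores.mean(axis=1))  # fuse by averaging across users

print(f"mean single-user AUC: {single_user_auc:.3f}")
print(f"group-of-{group_size} AUC:      {group_auc:.3f}")
```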
Recognition of dance-like actions: memory for static posture or dynamic movement?
Dance-like actions are complex visual stimuli involving multiple changes in body posture across time and space. Visual perception research has demonstrated a difference between the processing of dynamic body movement and the processing of static body posture. Yet, it is unclear whether this processing dissociation continues during the retention of body movement and body form in visual working memory (VWM). When observing a dance-like action, it is likely that static snapshot images of body posture will be retained alongside dynamic images of the complete motion. Therefore, we hypothesized that, as in perception, posture and movement would differ in VWM. Additionally, if body posture and body movement are separable in VWM, as form- and motion-based items, respectively, then differential interference from intervening form and motion tasks should occur during recognition. In two experiments, we examined these hypotheses. In Experiment 1, the recognition of postures and movements was tested in conditions in which the formats of the study and test stimuli matched (movement-study to movement-test, posture-study to posture-test) or mismatched (movement-study to posture-test, posture-study to movement-test). In Experiment 2, the recognition of postures and movements was compared after intervening form and motion tasks. The results indicated that (1) the recognition of body movement based only on posture is possible, but it is significantly poorer than recognition based on the entire movement stimulus, and (2) form-based interference does not impair memory for movements, although motion-based interference does. We concluded that, whereas static posture information is encoded during the observation of dance-like actions, body movement and body posture differ in VWM.
Connectionist perspectives on language learning, representation and processing.
The field of formal linguistics was founded on the premise that language is mentally represented as a deterministic symbolic grammar. While this approach has captured many important characteristics of the world's languages, it has also led to a tendency to focus theoretical questions on the correct formalization of grammatical rules while also de-emphasizing the role of learning and statistics in language development and processing. In this review we present a different approach to language research that has emerged from the parallel distributed processing or 'connectionist' enterprise. In the connectionist framework, mental operations are studied by simulating learning and processing within networks of artificial neurons. With that in mind, we discuss recent progress in connectionist models of auditory word recognition, reading, morphology, and syntactic processing. We argue that connectionist models can capture many important characteristics of how language is learned, represented, and processed, as well as providing new insights about the source of these behavioral patterns. Just as importantly, the networks naturally capture irregular (non-rule-like) patterns that are common within languages, something that has been difficult to reconcile with rule-based accounts of language without positing separate mechanisms for rules and exceptions.
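A minimal sketch of the framework's central idea, that a single learning mechanism can absorb both rule-like and exceptional mappings, is given below: one small two-layer network trained by gradient descent on a toy set of associations in which most items follow a regularity and one item violates it. The toy dataset, network size, and training settings are illustrative assumptions, not a model from the review.

```python
# A toy two-layer network, trained by plain batch gradient descent, learning a
# small set of associations in which most items follow a regularity (output =
# first input bit) and one item is an exception. Dataset, network size, and
# training settings are illustrative assumptions, not a model from the review.
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Eight distinct 3-bit "forms"; item [1, 0, 1] violates the regularity.
X = np.array([[(i >> b) & 1 for b in range(3)] for i in range(8)], dtype=float)
y = X[:, :1].copy()
y[5] = 1 - y[5]  # the exception

W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

for _ in range(20000):                       # batch gradient descent
    h = sigmoid(X @ W1 + b1)                 # hidden layer
    out = sigmoid(h @ W2 + b2)               # output layer
    d_out = (out - y) * out * (1 - out)      # error signal at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)     # backpropagated error at the hidden layer
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

# The same weights now reproduce both the regular items and the exception.
print(np.round(out.ravel(), 2))
print(y.ravel())
```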
The influence of early aging on eye movements during motor simulation
Movement-based interventions such as imagery and action observation are used increasingly to support the physical rehabilitation of adults during early aging. The efficacy of these more covert approaches is based on the intuitively appealing assumption that movement execution, imagery, and observation share a neural substrate; alteration of one directly influences the function of the other two. Using eye movement metrics, this paper reports findings that question the congruency of the three conditions. The data reveal that simulating movement through imagery and action observation may offer older adults movement practice conditions that are not constrained by the age-related decline observed in physical conditions. In addition, the findings provide support for action observation as a more effective technique for movement reproduction in comparison to imagery. This concern about imagery was also seen in the less congruent temporal relationship in movement time between imagery and movement execution, suggesting imagery inaccuracy in early aging.
