Reinstated episodic context guides sampling-based decisions for reward.
How does experience inform decisions? In episodic sampling, decisions are guided by a few episodic memories of past choices. This process can yield choice patterns similar to model-free reinforcement learning; however, samples can vary from trial to trial, causing decisions to vary. Here we show that context retrieved during episodic sampling can cause choice behavior to deviate sharply from the predictions of reinforcement learning. Specifically, we show that, when a given memory is sampled, choices (in the present) are influenced by the properties of other decisions made in the same context as the sampled event. This effect is mediated by fMRI measures of context retrieval on each trial, suggesting a mechanism whereby cues trigger retrieval of context, which then triggers retrieval of other decisions from that context. This result establishes a new avenue by which experience can guide choice and, as such, has broad implications for the study of decisions.
Variation of outdoor illumination as a function of solar elevation and light pollution
The illumination of the environment undergoes both intensity and spectral changes during the 24 h cycle of a day. Daylight spectral power distributions are well described by low-dimensional models such as the CIE (Commission Internationale de l'Éclairage) daylight model, but the performance of this model in non-daylight regimes has not been characterised. We measured downwelling spectral irradiance across multiple days in two locations in North America: one rural location (Cherry Springs State Park, PA) with minimal anthropogenic light sources, and one city location (Philadelphia, PA). We characterise the spectral, intensity and colour changes and extend the existing CIE model for daylight to capture twilight components and the spectrum of the night sky.
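The CIE daylight model referred to in this abstract reconstructs a daylight spectral power distribution from three fixed basis functions. In its standard formulation (the basis functions S_0, S_1, S_2 are tabulated by the CIE, and the weights follow from the chromaticity coordinates (x, y) of the target illuminant):

```latex
S(\lambda) = S_0(\lambda) + M_1\,S_1(\lambda) + M_2\,S_2(\lambda),
\qquad
M_1 = \frac{-1.3515 - 1.7703\,x + 5.9114\,y}{0.0241 + 0.2562\,x - 0.7341\,y},
\qquad
M_2 = \frac{0.0300 - 31.4424\,x + 30.0717\,y}{0.0241 + 0.2562\,x - 0.7341\,y}
```

Extending the model to twilight and the night sky, as the abstract proposes, amounts to augmenting this low-dimensional linear basis with additional components fit to the non-daylight measurements.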
Chromatic Illumination Discrimination Ability Reveals that Human Colour Constancy Is Optimised for Blue Daylight Illuminations
The phenomenon of colour constancy in human visual perception keeps surface colours constant, despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple, real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than the atypical locus, and is poorest particularly for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and therefore colour constancy diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased for the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.
Interaction of perceptual grouping and crossmodal temporal capture in tactile apparent-motion
Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can "capture" visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from -75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs, one short (75 ms) and one long (325 ms), were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation.
These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects.
Dynamics of trimming the content of face representations for categorization in the brain
To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) Over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) Concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300.
A database of whole-body action videos for the study of action, emotion, and untrustworthiness
We present a database of high-definition (HD) videos for the study of traits inferred from whole-body actions. Twenty-nine actors (19 female) were filmed performing different actions—walking, picking up a box, putting down a box, jumping, sitting down, and standing and acting—while conveying different traits, including four emotions (anger, fear, happiness, sadness), untrustworthiness, and neutral, where no specific trait was conveyed. For the actions conveying the four emotions and untrustworthiness, the actions were filmed multiple times, with the actor conveying the traits with different levels of intensity. In total, we made 2,783 action videos (in both two-dimensional and three-dimensional format), each lasting 7 s with a frame rate of 50 fps. All videos were filmed in a green-screen studio in order to isolate the action information from all contextual detail and to provide a flexible stimulus set for future use. In order to validate the traits conveyed by each action, we asked participants to rate each action in the two-dimensional videos on the trait that the actor portrayed. To provide a useful database of stimuli of multiple actions conveying multiple traits, each video name contains information on the gender of the actor, the action executed, the trait conveyed, and the rating of its perceived intensity. All videos can be downloaded free at the following address: http://www-users.york.ac.uk/~neb506/databases.html. We discuss potential uses for the database in the analysis of the perception of whole-body actions.
Pulses of melanopsin-directed contrast produce highly reproducible pupil responses that are insensitive to a change in background radiance
Purpose: To measure the pupil response to pulses of melanopsin-directed contrast, and to compare this response to those evoked by cone-directed contrast and spectrally narrowband stimuli. Methods: Three-second unipolar pulses were used to elicit pupil responses in human subjects across three sessions. Thirty subjects were studied in session 1, and most returned for sessions 2 and 3. The stimuli of primary interest were "silent substitution" cone- and melanopsin-directed modulations. Red and blue narrowband pulses delivered using the post-illumination pupil response (PIPR) paradigm were also studied. Sessions 1 and 2 were identical, whereas session 3 involved modulations around higher radiance backgrounds. The pupil responses were fit by a model whose parameters described response amplitude and temporal shape. Results: Group average pupil responses for all stimuli overlapped extensively across sessions 1 and 2, indicating high reproducibility. Model fits indicate that the response to melanopsin-directed contrast is prolonged relative to that elicited by cone-directed contrast. The group average cone- and melanopsin-directed pupil responses from session 3 were highly similar to those from sessions 1 and 2, suggesting that these responses are insensitive to background radiance over the range studied. The increase in radiance enhanced persistent pupil constriction to blue light. Conclusions: The group average pupil response to stimuli designed through silent substitution provides a reliable probe of the function of a melanopsin-mediated system in humans. As disruption of the melanopsin system may relate to clinical pathology, the reproducibility of response suggests that silent substitution pupillometry can test if melanopsin signals differ between clinical groups.
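The "silent substitution" stimuli described here are typically constructed by solving a small linear problem: find a modulation of the device primaries that changes melanopsin excitation while leaving the cone classes unchanged. A minimal NumPy sketch under assumed, made-up sensitivities (in practice, the sensitivity matrix comes from photoreceptor spectral sensitivities integrated against the measured spectra of each primary):

```python
import numpy as np

# Hypothetical receptor-by-primary sensitivity matrix: rows are photoreceptor
# classes (L, M, S cones, then melanopsin), columns are device primaries.
# Illustrative random values only; real entries are measured quantities.
rng = np.random.default_rng(0)
A = rng.random((4, 5))  # 4 receptors x 5 primaries

cones = A[:3, :]  # rows to silence (L, M, S cones)
mel = A[3, :]     # row to modulate (melanopsin)

# Null space of the cone rows: any primary modulation lying in this subspace
# produces zero differential cone excitation ("silent substitution").
_, _, Vt = np.linalg.svd(cones)          # full Vt is 5 x 5
null_basis = Vt[3:, :]                   # 5 primaries - 3 constraints = 2-dim

# Choose the null-space direction that maximizes melanopsin modulation by
# projecting the melanopsin row onto the null space, then normalizing.
direction = null_basis.T @ (null_basis @ mel)
direction /= np.linalg.norm(direction)

print(np.allclose(cones @ direction, 0.0))  # cone contrast is nulled
print(float(mel @ direction))               # remaining melanopsin modulation
```

With more primaries than constraints (here 5 vs. 3), the null space is non-trivial, which is why multi-channel illuminators are used for this class of stimulus.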
Local biases drive, but do not determine, the perception of illusory trajectories
When a dot moves horizontally across a set of tilted lines of alternating orientations, the dot appears to be moving up and down along its trajectory. This perceptual phenomenon, known as the slalom illusion, reveals a mismatch between the veridical motion signals and the subjective percept of the motion trajectory, which has not been comprehensively explained. In the present study, we investigated the empirical boundaries of the slalom illusion using psychophysical methods. The phenomenon was found to occur both under conditions of smooth pursuit eye movements and constant fixation, and to be consistently amplified by intermittently occluding the dot trajectory. When the motion direction of the dot was not constant, however, the stimulus display did not elicit the expected illusory percept. These findings confirm that a local bias towards perpendicularity at the intersection points between the dot trajectory and the tilted lines causes the illusion, but also highlight that higher-level cortical processes are involved in interpreting and amplifying the biased local motion signals into a global illusion of trajectory perception.
Natural images from the birthplace of the human eye
Here we introduce a database of calibrated natural images publicly available through an easy-to-use web interface. Using a Nikon D70 digital SLR camera, we acquired about 5,000 six-megapixel images of the Okavango Delta of Botswana, a tropical savanna habitat similar to where the human eye is thought to have evolved. Some sequences of images were captured unsystematically while following a baboon troop, while others were designed to vary a single parameter such as aperture, object distance, time of day or position on the horizon. Images are available in the raw RGB format and in grayscale. Images are also available in units relevant to the physiology of human cone photoreceptors, where pixel values represent the expected number of photoisomerizations per second for cones sensitive to long (L), medium (M) and short (S) wavelengths. This database is distributed under a Creative Commons Attribution-Noncommercial Unported license to facilitate research in computer vision, psychophysics of perception, and visual neuroscience.
Comment: Submitted to PLoS ONE.
Fractionation of parietal function in bistable perception probed with concurrent TMS-EEG
When visual input has conflicting interpretations, conscious perception can alternate spontaneously between these possible interpretations. This is called bistable perception. Previous neuroimaging studies have indicated the involvement of two right parietal areas in resolving perceptual ambiguity (ant-SPLr and post-SPLr). Transcranial magnetic stimulation (TMS) studies that selectively interfered with the normal function of these regions suggest that they play opposing roles in this type of perceptual switch. In the present study, we investigated this fractionation of parietal function by use of combined TMS with electroencephalography (EEG). Specifically, while participants viewed either a bistable stimulus, a replay stimulus, or resting-state fixation, we applied single-pulse TMS to either location independently while simultaneously recording EEG. Combined with participants' individual structural magnetic resonance imaging (MRI) scans, this dataset allows for complex analyses of the effect of TMS on neural time series data, which may further elucidate the causal role of the parietal cortex in ambiguous perception.
