773 research outputs found
Lightning
This Why Files article examines lightning. Lightning is the second deadliest storm-related hazard in the United States. Topics covered include: what lightning is, how it injures and kills, and what has been learned in the past few years from research on nature's electricity. Two experts were interviewed for this article. Educational levels: General public, High school, Intermediate elementary, Middle school
Event-sequence analysis of appraisals and coping during trapshooting performance
This study describes appraisal and coping patterns of trapshooters during competition, via post-performance retrospective verbal reports. Probabilities that an event (e.g., missed target) is followed by another event (e.g., negative appraisal) were calculated, and state transition diagrams were drawn. Event sequences during critical and non-critical performance periods were compared. Negative appraisals were most likely before and after missed targets and hits with the second shot. Positive appraisals were most likely before problem-focused coping and after emotion-focused coping. These findings support the process view of coping by illustrating that athletes cope with a variety of situations via a complex set of appraisals.
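The transition-probability computation the study describes can be sketched in a few lines; the event labels below are hypothetical stand-ins, not the study's actual coding scheme.

```python
from collections import Counter, defaultdict

def transition_probabilities(events):
    """Estimate P(next_event | current_event) from an ordered event sequence."""
    pair_counts = defaultdict(Counter)
    for current, nxt in zip(events, events[1:]):
        pair_counts[current][nxt] += 1
    probs = {}
    for current, counter in pair_counts.items():
        total = sum(counter.values())
        probs[current] = {nxt: n / total for nxt, n in counter.items()}
    return probs

# Toy sequence of coded performance events (hypothetical labels):
sequence = ["hit", "miss", "negative_appraisal", "problem_coping",
            "hit", "miss", "negative_appraisal"]
p = transition_probabilities(sequence)
print(p["miss"])  # {'negative_appraisal': 1.0}
```

Each entry of the result is one arrow in a state transition diagram, labeled with its estimated conditional probability.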
Theory Acquisition as Stochastic Search
We present an algorithmic model for the development of children’s intuitive theories within a hierarchical Bayesian framework, where theories are described as sets of logical laws generated by a probabilistic context-free grammar. Our algorithm performs stochastic search at two levels of abstraction – an outer loop in the space of theories, and an inner loop in the space of explanations or models generated by each theory given a particular dataset – in order to discover the theory that best explains the observed data. We show that this model is capable of learning correct theories in several everyday domains, and discuss the dynamics of learning in the context of children’s cognitive development.
United States. Air Force Office of Scientific Research (AFOSR FA9550-07-1-0075); United States. Office of Naval Research (ONR N00014-09-0124); James S. McDonnell Foundation (Causal Learning Collaborative Initiative)
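As a loose illustration of an outer loop of this kind, here is a minimal Metropolis-style search over a toy discrete theory space. The theory names and scores are invented stand-ins for an unnormalized P(data | theory) × P(theory); the paper's actual grammar-based theory representation is not reproduced here.

```python
import random

def stochastic_search(theories, score, steps=200, rng=random):
    """Metropolis-style search over a discrete set of candidate theories:
    always accept a higher-scoring proposal, otherwise accept with
    probability equal to the score ratio. Track the best theory visited."""
    current = rng.choice(theories)
    best = current
    for _ in range(steps):
        proposal = rng.choice(theories)
        if score(proposal) >= score(current) or rng.random() < score(proposal) / score(current):
            current = proposal
        if score(current) > score(best):
            best = current
    return best

random.seed(1)
# Toy unnormalized scores standing in for P(data | theory) * P(theory):
scores = {"magnetism": 0.7, "no-structure": 0.1, "contact-causation": 0.2}
best = stochastic_search(list(scores), scores.get)
print(best)
```

In the paper's setting the proposal step would rewrite laws via the grammar, and the score would itself require an inner-loop search over models; uniform proposals over a fixed set keep the sketch short.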
Compositional Policy Priors
This paper describes a probabilistic framework for incorporating structured inductive biases into reinforcement learning. These inductive biases arise from policy priors, probability distributions over optimal policies. Borrowing recent ideas from computational linguistics and Bayesian nonparametrics, we define several families of policy priors that express compositional, abstract structure in a domain. Compositionality is expressed using probabilistic context-free grammars, enabling a compact representation of hierarchically organized sub-tasks. Useful sequences of sub-tasks can be cached and reused by extending the grammars nonparametrically using Fragment Grammars. We present Monte Carlo methods for performing inference, and show how structured policy priors lead to substantially faster learning in complex domains compared to methods without inductive biases.
This work was supported by AFOSR FA9550-07-1-0075 and ONR N00014-07-1-0937. SJG was supported by a Graduate Research Fellowship from the NSF.
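To make the compositional idea concrete, here is a tiny sketch of sampling an action sequence from a hand-written PCFG over sub-tasks. The grammar, nonterminal names, and primitive actions are all hypothetical; the paper's priors are richer (and extended nonparametrically via Fragment Grammars).

```python
import random

# A hypothetical PCFG over sub-tasks: each nonterminal maps to a weighted
# list of right-hand sides mixing nonterminals and terminal actions.
GRAMMAR = {
    "Task": [(0.5, ["GoTo", "Task"]), (0.5, ["GoTo"])],
    "GoTo": [(1.0, ["move", "grasp"])],
}

def sample(symbol="Task", rng=random):
    """Expand a symbol top-down; symbols absent from GRAMMAR are terminals."""
    if symbol not in GRAMMAR:
        return [symbol]
    weights, rhss = zip(*GRAMMAR[symbol])
    rhs = rng.choices(rhss, weights=weights)[0]
    out = []
    for s in rhs:
        out.extend(sample(s, rng))
    return out

random.seed(0)
print(sample())
```

Every sample is a hierarchically generated flat action sequence, which is exactly what makes PCFGs a compact prior over policies built from reusable sub-tasks.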
The Infinite Latent Events Model
We present the Infinite Latent Events Model, a nonparametric hierarchical Bayesian distribution over infinite-dimensional Dynamic Bayesian Networks with binary state representations and noisy-OR-like transitions. The distribution can be used to learn structure in discrete time-series data by simultaneously inferring a set of latent events, which events fired at each timestep, and how those events are causally linked. We illustrate the model on a sound factorization task, a network topology identification task, and a video game task.
NTT Communication Science Laboratories; United States. Air Force Office of Scientific Research (AFOSR FA9550-07-1-0075); United States. Office of Naval Research (ONR N00014-07-1-0937); National Science Foundation (U.S.) (Graduate Research Fellowship); United States. Army Research Office (ARO W911NF-08-1-0242); James S. McDonnell Foundation (Causal Learning Collaborative Initiative)
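A noisy-OR transition of the kind mentioned can be computed directly: each active parent independently tries to activate the child, and a small leak lets the child fire spontaneously. The strengths and leak value below are illustrative, not the model's learned parameters.

```python
def noisy_or(parent_states, weights, leak=0.01):
    """P(child = 1) under a noisy-OR of binary parents: the child stays off
    only if the leak and every active parent all fail to activate it."""
    p_off = 1.0 - leak
    for state, w in zip(parent_states, weights):
        if state:
            p_off *= 1.0 - w
    return 1.0 - p_off

# Two latent events active, with activation strengths 0.8 and 0.5:
print(noisy_or([1, 1], [0.8, 0.5]))  # 1 - 0.99 * 0.2 * 0.5 = 0.901
```

Because failures multiply, adding more active parents can only increase the child's activation probability, which is what makes noisy-OR a natural model of multiple independent causes.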
Nonparametric Bayesian Policy Priors for Reinforcement Learning
We consider reinforcement learning in partially observable domains where the agent can query an expert for demonstrations. Our nonparametric Bayesian approach combines model knowledge, inferred from expert information and independent exploration, with policy knowledge inferred from expert trajectories. We introduce priors that bias the agent towards models with both simple representations and simple policies, resulting in improved policy and model learning.
A Compositional Object-Based Approach to Learning Physical Dynamics
We present the Neural Physics Engine (NPE), an object-based neural network architecture for learning predictive models of intuitive physics. We propose a factorization of a physical scene into composable object-based representations and also the NPE architecture, whose compositional structure factorizes object dynamics into pairwise interactions. Our approach draws on the strengths of both symbolic and neural approaches: like a symbolic physics engine, the NPE is endowed with generic notions of objects and their interactions, but as a neural network it can also be trained via stochastic gradient descent to adapt to specific object properties and dynamics of different worlds. We evaluate the efficacy of our approach on simple rigid body dynamics in two-dimensional worlds. By comparing to less structured architectures, we show that our model's compositional representation of the structure in physical interactions improves its ability to predict movement, generalize to different numbers of objects, and infer latent properties of objects such as mass.
National Science Foundation (U.S.) (Award CCF-1231216); United States. Office of Naval Research (Grant N00014-16-1-2007)
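The factorization of dynamics into summed pairwise interactions can be sketched as follows; `spring` is a made-up pairwise effect standing in for the NPE's learned neural interaction function.

```python
def predict_velocity(focus, context, pair_effect):
    """Compositional object-based prediction: the focus object's next
    velocity is its current velocity plus the sum of pairwise effects
    contributed by every other object in the scene."""
    ex_total, ey_total = 0.0, 0.0
    for other in context:
        ex, ey = pair_effect(focus, other)
        ex_total += ex
        ey_total += ey
    return (focus["vx"] + ex_total, focus["vy"] + ey_total)

def spring(a, b, k=0.1):
    # Toy pairwise effect: pull the focus object toward the other object.
    return (k * (b["x"] - a["x"]), k * (b["y"] - a["y"]))

ball = {"x": 0.0, "y": 0.0, "vx": 1.0, "vy": 0.0}
others = [{"x": 2.0, "y": 0.0}, {"x": 0.0, "y": 4.0}]
print(predict_velocity(ball, others, spring))
```

Because the same pairwise function is applied to every object pair and the effects are summed, the predictor generalizes to any number of objects without retraining its structure.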
Learning a theory of causality
The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality and a range of alternatives in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned—an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge. Together, these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion.
James S. McDonnell Foundation (Causal Learning Collaborative Initiative); United States. Office of Naval Research (Grant N00014-09-0124); United States. Air Force Office of Scientific Research (Grant FA9550-07-1-0075); United States. Army Research Office (Grant W911NF-08-1-0242)
Bayes and Blickets: Effects of Knowledge on Causal Induction in Children and Adults
People are adept at inferring novel causal relations, even from only a few observations. Prior knowledge about the probability of encountering causal relations of various types and the nature of the mechanisms relating causes and effects plays a crucial role in these inferences. We test a formal account of how this knowledge can be used and acquired, based on analyzing causal induction as Bayesian inference. Five studies explored the predictions of this account with adults and 4-year-olds, using tasks in which participants learned about the causal properties of a set of objects. The studies varied the two factors that our Bayesian approach predicted should be relevant to causal induction: the prior probability with which causal relations exist, and the assumption of a deterministic or a probabilistic relation between cause and effect. Adults’ judgments (Experiments 1, 2, and 4) were in close correspondence with the quantitative predictions of the model, and children’s judgments (Experiments 3 and 5) agreed qualitatively with this account.
Mitsubishi Electric Research Laboratories; United States. Air Force Office of Scientific Research; Massachusetts Institute of Technology. Paul E. Newton Chair; James S. McDonnell Foundation
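The Bayesian account of "blicket" induction can be illustrated with a deterministic-detector model: enumerate hypotheses about which objects are blickets, zero out those inconsistent with the observations, and renormalize. The prior and observations below are illustrative, not the studies' actual parameters.

```python
from itertools import product

def posterior(observations, prior_blicket=0.3, n_objects=2):
    """Posterior over which objects are causes ('blickets'), assuming a
    deterministic detector that activates iff at least one blicket is on it.
    observations: list of (tuple of object indices placed, activated?)"""
    post = {}
    for hyp in product([0, 1], repeat=n_objects):
        # Independent prior over each object being a blicket:
        p = 1.0
        for obj in range(n_objects):
            p *= prior_blicket if hyp[obj] else (1.0 - prior_blicket)
        # Deterministic likelihood: zero out inconsistent hypotheses.
        for placed, activated in observations:
            if any(hyp[i] for i in placed) != activated:
                p = 0.0
        post[hyp] = p
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Object A alone activates the detector; A and B together also do:
obs = [((0,), True), ((0, 1), True)]
print(posterior(obs))  # mass on (1, 0) is about 0.7, on (1, 1) about 0.3
```

Note that the evidence about A "screens off" B: the posterior probability that B is a blicket stays at its prior, the backwards-blocking pattern this line of work examines.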
