Causal Responsibility and Counterfactuals.
How do people attribute responsibility in situations where the contributions of multiple agents combine to produce a joint outcome? The prevalence of over-determination in such cases makes this a difficult problem for counterfactual theories of causal responsibility. In this article, we explore a general framework for assigning responsibility in multiple-agent contexts. We draw on the structural model account of actual causation (e.g., Halpern & Pearl, 2005) and its extension to responsibility judgments (Chockler & Halpern, 2004). We review the main theoretical and empirical issues that arise from this literature and propose a novel model of intuitive judgments of responsibility. This model is a function of both pivotality (whether an agent made a difference to the outcome) and criticality (how important the agent is perceived to be for the outcome, before any actions are taken). The model explains empirical results from previous studies and is supported by a new experiment that manipulates both pivotality and criticality. We also discuss possible extensions of this model to deal with a broader range of causal situations. Overall, our approach emphasizes the close interrelations between causality, counterfactuals, and responsibility attributions.
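Below is a minimal sketch of the kind of computation such a model involves, in the spirit of Chockler and Halpern's (2004) pivotality measure. The helper names (pivotality, responsibility, majority) and the weighted blend with criticality are illustrative assumptions for this listing, not the functional form proposed in the article.

```python
# Minimal sketch of a pivotality-based responsibility measure in the spirit of
# Chockler & Halpern (2004). The weighted combination with criticality is an
# illustrative assumption, not the exact model proposed in the article.
from itertools import combinations

def pivotality(votes, agent, rule):
    """1/(k+1), where k is the minimal number of other agents whose votes must
    change so that flipping `agent`'s vote flips the outcome under `rule`."""
    others = [a for a in votes if a != agent]
    for k in range(len(others) + 1):
        for changed in combinations(others, k):
            modified = dict(votes)
            for a in changed:
                modified[a] = not modified[a]
            flipped = dict(modified)
            flipped[agent] = not flipped[agent]
            if rule(modified) != rule(flipped):
                return 1.0 / (k + 1)
    return 0.0

def responsibility(votes, agent, rule, criticality, weight=0.5):
    """Blend pivotality with prior criticality (both in [0, 1])."""
    return weight * pivotality(votes, agent, rule) + (1 - weight) * criticality

# Example: majority vote among three agents who all voted "yes".
majority = lambda v: sum(v.values()) >= 2
votes = {"A": True, "B": True, "C": True}
print(responsibility(votes, "A", majority, criticality=1.0))
```

In the unanimous three-voter example, agent A is not pivotal as things stand but becomes pivotal once one other vote is changed, giving a pivotality of 1/2 before criticality is factored in.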
Physical problem solving: Joint planning with symbolic, geometric, and dynamic constraints
In this paper, we present a new task that investigates how people interact with and make judgments about towers of blocks. In Experiment 1, participants in the lab solved a series of problems in which they had to re-configure three blocks from an initial to a final configuration. We recorded whether they used one hand or two hands to do so. In Experiment 2, we asked participants online to judge whether they thought the person in the lab used one or two hands. The results revealed a close correspondence between participants' actions in the lab and the mental simulations of participants online. To explain participants' actions and mental simulations, we develop a model that plans over a symbolic representation of the situation, executes the plan using a geometric solver, and checks the plan's feasibility by taking into account the physical constraints of the scene. Our model explains participants' actions and judgments to a high degree of quantitative accuracy.
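The following sketch illustrates the plan / execute / check structure described in the abstract. The symbolic planner, geometric solver, and stability test below are toy stand-ins written for this listing; they are not the paper's implementation.

```python
# Schematic sketch of the plan / execute / check pipeline described above.
# All three components are simplified placeholders.
from dataclasses import dataclass

@dataclass
class Move:
    block: str
    target: tuple  # (x, y) position the block should end up at

def symbolic_plan(initial, goal):
    """Order the blocks that need to move (toy heuristic: bottom-up by height)."""
    moves = [Move(b, goal[b]) for b in goal if initial.get(b) != goal[b]]
    return sorted(moves, key=lambda m: m.target[1])

def geometric_solve(move, state):
    """Place the block at its target; a real solver would compute grasps and paths."""
    state = dict(state)
    state[move.block] = move.target
    return state

def physically_stable(state, support_tol=0.5):
    """Toy stability check: every raised block needs support directly below it."""
    for block, (x, y) in state.items():
        if y > 0 and not any(
            abs(ox - x) <= support_tol and oy == y - 1
            for ob, (ox, oy) in state.items() if ob != block
        ):
            return False
    return True

initial = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
goal = {"A": (0, 0), "B": (0, 1), "C": (0, 2)}
state = dict(initial)
for move in symbolic_plan(initial, goal):
    candidate = geometric_solve(move, state)
    if not physically_stable(candidate):
        print(f"Plan infeasible at {move}")
        break
    state = candidate
print(state)
```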
Explaining intuitive difficulty judgments by modeling physical effort and risk
The ability to estimate task difficulty is critical for many real-world decisions, such as setting appropriate goals for ourselves or appreciating others' accomplishments. Here we give a computational account of how humans judge the difficulty of a range of physical construction tasks (e.g., moving 10 loose blocks from their initial configuration to their target configuration, such as a vertical tower) by quantifying two key factors that influence construction difficulty: physical effort and physical risk. Physical effort captures the minimal work needed to transport all objects to their final positions and is computed using a hybrid task-and-motion planner. Physical risk corresponds to the stability of the structure and is computed using noisy physics simulations to capture the costs for precision (e.g., attention, coordination, fine motor movements) required for success. We show that the full effort-risk model captures human estimates of difficulty and construction time better than either component alone.
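A rough sketch of how an effort-plus-risk score could be assembled is given below. The lifting-work lower bound, the jittered placement test, and the linear weighting are assumptions made for illustration; the paper's model relies on a hybrid task-and-motion planner and full noisy physics simulations.

```python
# Illustrative sketch of an effort + risk difficulty score; all components are
# simplified stand-ins for the model described in the abstract.
import random

def physical_effort(initial, target, mass=1.0, g=9.81):
    """Lower bound on work: lift each block by its required height gain."""
    return sum(
        mass * g * max(target[b][1] - initial[b][1], 0.0) for b in target
    )

def physical_risk(target, n_sims=1000, noise=0.2, support_tol=0.5):
    """Fraction of noisy placements in which some raised block loses support."""
    failures = 0
    for _ in range(n_sims):
        jittered = {b: (x + random.gauss(0, noise), y) for b, (x, y) in target.items()}
        for b, (x, y) in jittered.items():
            if y > 0 and not any(
                abs(ox - x) <= support_tol and oy == y - 1
                for ob, (ox, oy) in jittered.items() if ob != b
            ):
                failures += 1
                break
    return failures / n_sims

def difficulty(initial, target, w_effort=1.0, w_risk=10.0):
    return w_effort * physical_effort(initial, target) + w_risk * physical_risk(target)

initial = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
tower = {"A": (0, 0), "B": (0, 1), "C": (0, 2)}
print(difficulty(initial, tower))
```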
Concepts in a Probabilistic Language of Thought
Note: The book chapter is reprinted courtesy of The MIT Press, from the forthcoming edited collection “The Conceptual Mind: New Directions in the Study of Concepts”, edited by Eric Margolis and Stephen Laurence, print date Spring 2015.
Knowledge organizes our understanding of the world, determining what we expect given what we have already seen. Our predictive representations have two key properties: they are productive, and they are graded. Productive generalization is possible because our knowledge decomposes into concepts—elements of knowledge that are combined and recombined to describe particular situations. Gradedness is the observable effect of accounting for uncertainty—our knowledge encodes degrees of belief that lead to graded probabilistic predictions. To put this a different way, concepts form a combinatorial system that enables description of many different situations; each such situation specifies a distribution over what we expect to see in the world, given what we have seen. We may think of this system as a probabilistic language of thought (PLoT) in which representations are built from language-like composition of concepts and the content of those representations is a probability distribution on world states. The purpose of this chapter is to formalize these ideas in computational terms, to illustrate key properties of the PLoT approach with a concrete example, and to draw connections with other views of conceptual structure.
This work was supported by ONR awards N00014-09-1-0124 and N00014-13-1-0788, by a John S. McDonnell Foundation Scholar Award, and by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
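The chapter's central idea can be gestured at in a few lines of ordinary Python: concepts are small stochastic functions, composing them describes a situation, and conditioning on observations (here, naive rejection sampling) yields graded predictions. The tug-of-war flavour of the example is a common PLoT illustration; the specific functions and numbers below are assumptions, not code from the chapter.

```python
# Minimal sketch of the PLoT idea: concepts as stochastic functions that compose,
# with conditioning via rejection sampling. Numbers are arbitrary placeholders.
import random

def strength(person, world):
    """Concept: each person has a persistent strength value within a world."""
    if person not in world:
        world[person] = random.gauss(0, 1)
    return world[person]

def lazy(person):
    """Concept: on any attempt, a person may be lazy with probability 0.3."""
    return random.random() < 0.3

def pulling(person, world):
    """Composition: effective effort on one pull."""
    s = strength(person, world)
    return s / 2 if lazy(person) else s

def beats(team1, team2, world):
    return sum(pulling(p, world) for p in team1) > sum(pulling(p, world) for p in team2)

# Query: given that Alice beat Bob, how likely is she to beat Bob and Carol together?
samples = []
for _ in range(20000):
    world = {}
    if beats(["alice"], ["bob"], world):            # condition on the observation
        samples.append(beats(["alice"], ["bob", "carol"], world))
print(sum(samples) / len(samples))
```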
Annealing of superconducting magnet protection diodes for the LHC after irradiation at liquid helium temperatures
Irradiation Tests at Cryogenic Temperatures on Diffusion Type Diodes for the LHC Superconducting Magnet Protection
Within the framework of the LHC magnet protection system, the irradiation hardness of the high-current by-pass diodes is under examination. The relocation of these diodes and recent calculations indicate rather low irradiation levels at the diodes' position. This opens the possibility of replacing the originally foreseen epitaxial-type diodes with diffusion-type diodes. Therefore, different types of 75 mm diffusion diodes were submitted to an irradiation test program. One part of the experiments was performed in the Munich Research Reactor. Further irradiation tests were carried out in the northern fixed-target area of the SPS accelerator at CERN.
Eye-Tracking Causality
How do people make causal judgments? What role, if any, does counterfactual simulation play? Counterfactual theories of causal judgments predict that people compare what actually happened with what would have happened if the candidate cause had been absent. Process theories predict that people focus only on what actually happened, to assess the mechanism linking candidate cause and outcome. We tracked participants' eye movements while they judged whether one billiard ball caused another to go through a gate or prevented it from going through. Both participants' looking patterns and their judgments demonstrated that counterfactual simulation played a critical role. Participants simulated where the target ball would have gone if the candidate cause had been removed from the scene. The more certain participants were that the outcome would have been different, the stronger their causal judgments. These results provide the first direct evidence for spontaneous counterfactual simulation in an important domain of high-level cognition.
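A bare-bones rendering of the counterfactual-simulation idea is sketched below: remove the candidate cause, rerun noisy simulations of the remaining motion, and score the cause by how often the outcome would have changed. The one-dimensional dynamics and the parameter values are placeholders, not the paper's billiard-table physics.

```python
# Rough sketch of counterfactual simulation: remove the candidate cause, rerun
# noisy simulations, and score the cause by how often the outcome would differ.
import random

def simulate_without_cause(ball_pos, ball_vel, gate=(0.4, 0.6), noise=0.05, steps=50, dt=0.1):
    """Roll the target ball forward without the candidate cause, with motion noise."""
    x, y = ball_pos
    vx, vy = ball_vel
    for _ in range(steps):
        vx += random.gauss(0, noise) * dt
        x, y = x + vx * dt, y + vy * dt
        if y >= 1.0:                        # ball reaches the end wall
            return gate[0] <= x <= gate[1]  # True if it went through the gate
    return False

def causal_judgment(actual_outcome, ball_pos, ball_vel, n_sims=5000):
    """P(counterfactual outcome differs from what actually happened)."""
    different = sum(
        simulate_without_cause(ball_pos, ball_vel) != actual_outcome
        for _ in range(n_sims)
    )
    return different / n_sims

# Ball B went through the gate after being hit by ball A; would it have missed
# without A? Higher values -> stronger judgment that A caused the outcome.
print(causal_judgment(actual_outcome=True, ball_pos=(0.1, 0.0), ball_vel=(0.0, 0.5)))
```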
