    Natural intensions

    There is an attractive way to explain representation in terms of adaptivity: roughly, an item R represents a state of affairs S if it has the proper function of co-occurring with S (that is, if the ancestors of R co-occurred with S and this co-occurrence explains why R was selected for, and thus why R exists now). Although this may be an adequate account of the extension or reference of R, what such explanations often neglect is an account of the intension or sense of R: how S is represented by R. No doubt such an account, if correct, would be complex, involving such things as the proper functions of the mechanisms that use R, the mechanisms by which R fulfills its function, and more. But it seems likely that an important step toward such an account would be the identification of the norms that govern this process. The norms of validity and Bayes' Theorem can guide investigations into the actual inferences and probabilistic reasoning that organisms perform. Is there a norm that can do the same for intension-fixing? I argue that before this question can be answered, some problems with the biosemantic account of extension must be resolved. I attempt to do so by offering a complexity-based account of the natural extension of a representation R: for a given set of ancestral co-occurrences Z, the natural extension is the extension of the least complex intension that best covers Z. Minimal description length is considered as a means for measuring complexity. Some advantages of and problems with the account are identified.
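    To make the selection rule concrete: a two-part minimum description length (MDL) score charges each candidate intension for its own complexity plus the cost of encoding the co-occurrence data Z given that intension, and the natural extension is fixed by the candidate with the lowest total. The sketch below is a minimal illustration of that rule, not code from the paper; the candidate predicates, their description lengths, and the coding scheme are all invented for the example.

    ```python
    # Illustrative two-part MDL selection of a "natural intension".
    # The candidate predicates and their description lengths (in bits) are
    # invented for this example; a real account would derive both.
    CANDIDATES = [
        ("fly",          lambda x: x["kind"] == "fly",           4.0),
        ("small + dark", lambda x: x["size"] < 5 and x["dark"],  6.0),
        ("moving",       lambda x: x["moving"],                  3.0),
    ]

    def data_cost(predicate, Z, bits_per_exception=8.0):
        """Cost of encoding Z given the intension: a fixed charge for every
        ancestral co-occurrence the predicate fails to cover. (A fuller code
        would also penalize overbreadth; omitted here for brevity.)"""
        return bits_per_exception * sum(1 for x in Z if not predicate(x))

    def natural_intension(Z, candidates):
        """Least complex intension that best covers Z: minimize
        description_length(intension) + description_length(Z | intension)."""
        return min(candidates, key=lambda c: c[2] + data_cost(c[1], Z))

    # Toy set Z of ancestral co-occurrences that the representation R tracked.
    Z = [
        {"kind": "fly",  "size": 2, "dark": True, "moving": True},
        {"kind": "fly",  "size": 3, "dark": True, "moving": True},
        {"kind": "seed", "size": 1, "dark": True, "moving": False},
    ]

    print("selected intension:", natural_intension(Z, CANDIDATES)[0])
    ```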

    A human-centered approach to AI ethics: a perspective from cognitive science

    This chapter explores a human-centered approach to AI and robot ethics. It demonstrates how a human-centered approach can resolve some problems in AI and robot ethics that arise from the fact that AI systems and robots have cognitive states yet have no welfare and are not responsible agents. In particular, the approach allows that violence toward robots can be wrong even if robots cannot be harmed. More importantly, the approach encourages people to shift away from designing robots as if they were human ethical deliberators. Ultimately, the cognitive states of AI systems and robots may have a role to play in the proper ethical analysis of situations involving them, even if it is not by virtue of conferring welfare or responsibilities on those systems or robots.

    Prevailing theories of consciousness are challenged by novel cross-modal associations acquired between subliminal stimuli

    While theories of consciousness differ substantially, the ‘conscious access hypothesis’, which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1–4 the stimuli were word pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name–profession pairs, where one name was paired with a creative profession and the other with an uncreative profession. A supraliminal task followed, requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect, with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3 utilising non-linguistic stimuli. The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not consciously perceived and thus challenge the conscious access hypothesis and those theories embracing it.
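    As a concrete picture of the design used in Experiments 1–4, the sketch below lays out the test-trial structure described above: each target profession is preceded either by the name it was subliminally paired with (concordant) or by the alternate name (discordant). The names, professions, and trial counts are placeholders invented for this illustration; only the concordant/discordant structure comes from the abstract.

    ```python
    import random

    # Placeholder study pairs: one name paired with a creative profession,
    # the other with an uncreative one (names/professions invented here).
    PAIRS = {"John": "poet", "Mark": "accountant"}
    ALTERNATE = {"John": "Mark", "Mark": "John"}

    def make_test_trials(n_per_condition=10):
        """Build timed-classification trials: a prime name, then a target
        profession to classify as creative vs. uncreative. Concordant trials
        reuse the subliminally paired name; discordant trials use the other."""
        trials = []
        for name, profession in PAIRS.items():
            for _ in range(n_per_condition):
                trials.append({"prime": name, "target": profession,
                               "condition": "concordant"})
                trials.append({"prime": ALTERNATE[name], "target": profession,
                               "condition": "discordant"})
        random.shuffle(trials)
        return trials

    # The reported inverse priming effect: classification was significantly
    # SLOWER on concordant trials than on discordant trials (Experiments 1-3).
    trials = make_test_trials()
    print(trials[0])
    ```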

    Why Everything Doesn't Realize Every Computation

    Some have suggested that there is no fact of the matter as to whether or not a particular physical system realizes a particular computational description. This suggestion has been taken to imply that computational states are not "real", and cannot, for example, provide a foundation for the cognitive sciences. In particular, Putnam has argued that every ordinary open physical system realizes every abstract finite automaton, implying that the fact that a particular computational characterization applies to a physical system does not tell one anything about the nature of that system. Putnam's argument is scrutinized and found inadequate because, among other things, it employs a notion of causation that is too weak. I argue that if one's view of computation involves embeddedness (inputs and outputs) and full causality, one can avoid the universal realizability results. Therefore, the fact that a particular system realizes a particular automaton is not a vacuous one, and is often explanatory. Furthermore, I claim that computation would not necessarily be an explanatorily vacuous notion even if it were universally realizable.
    Key words: Computation, philosophy of computation, embeddedness, foundations of cognitive science, formality, multiple realization.
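    To see what is at stake, consider the standard way a realization claim is cashed out: a mapping from physical states to automaton states under which the physical transitions mirror the automaton's transitions. The sketch below is an illustrative reconstruction of Putnam's construction, not code from the paper; the automaton, the physical trace, and the mapping are all invented for the example. It shows why, if any mapping whatsoever is allowed, an arbitrary system with distinct states over time 'realizes' any inputless automaton run, and the closing comment notes the constraint that blocks the trick.

    ```python
    # A tiny deterministic finite automaton (inputless, so just a state cycle).
    # Everything here is an illustrative assumption, not the paper's formalism.
    FSA_NEXT = {"A": "B", "B": "C", "C": "A"}  # A -> B -> C -> A -> ...

    def fsa_run(start, steps):
        """The automaton's state trajectory from `start` over `steps` transitions."""
        run, s = [start], start
        for _ in range(steps):
            s = FSA_NEXT[s]
            run.append(s)
        return run

    def putnam_mapping(physical_trace, fsa_trace):
        """Putnam-style construction: if the physical system passes through
        distinct states at each time step (true of ordinary open systems),
        map the physical state at time t to whatever FSA state occurs at t.
        With no further constraints, this always 'realizes' the run."""
        assert len(set(physical_trace)) == len(physical_trace), "states must be distinct"
        return dict(zip(physical_trace, fsa_trace))

    # An arbitrary physical trace: a rock's (distinct) microstates over time.
    physical_trace = ["p0", "p1", "p2", "p3", "p4", "p5"]
    fsa_trace = fsa_run("A", steps=5)

    mapping = putnam_mapping(physical_trace, fsa_trace)
    # Check: under the mapping, each physical transition mirrors an FSA transition.
    assert all(FSA_NEXT[mapping[physical_trace[t]]] == mapping[physical_trace[t + 1]]
               for t in range(len(physical_trace) - 1))
    print("the rock 'realizes' the run:", [mapping[p] for p in physical_trace])

    # The trick fails once realization must respect inputs and counterfactuals:
    # the mapping must send the same physical state to the same FSA state across
    # ALL possible input histories, not just the one trajectory that happened.
    ```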

    Abstract of "Two problems for Higher-Order Thought Theories of Consciousness"

    Higher-order thought theories of consciousness (HOT theories) claim that one is in a mental state M consciously iff: 1) one has the thought that one is in M, and 2) one actually is in M. I argue that such theories face two related problems, the Modal Problem and the Poor Theory Problem. I show that, according to HOT theories, it is an empirical possibility that I do not have any conscious mental states. This is the Modal Problem: HOT theories cannot be correct, because it is in fact empirically impossible that I do not have any conscious mental states. The Poor Theory Problem is that HOT theories, I argue, demand that we deny consciousness to any person whose theory of mind is incorrect to the point of failing to refer to their actual mental states. Yet it seems wrong to deny consciousness to someone just because they have an incorrect theory of mind. Insofar as perception is conceptual, higher-order perception theories of consciousness (HOP theories) fall afoul of the same problems. I consider whether a variant of higher-order theories involving non-conceptual representation of one's own mental states might avoid these problems.