Natural intensions
There is an attractive way to explain representation in terms of adaptivity: roughly, an item R represents a state of affairs S if it has the proper function of co-occurring with S (that is, if the ancestors of R co-occurred with S and this co-occurrence explains why R was selected for, and thus why R exists now). Although this may be an adequate account of the extension or reference of R, what such explanations often neglect is an account of the intension or sense of R: how S is represented by R. No doubt such an account, if correct, would be complex, involving such things as the proper functions of the mechanisms that use R, the mechanisms by which R fulfills its function, and more. But it seems likely that an important step toward such an account would be the identification of the norms that govern this process. The norms of validity and Bayes' Theorem can guide investigations into the actual inferences and probabilistic reasoning that organisms perform. Is there a norm that can do the same for intension-fixing? I argue that before this question can be answered, some problems with the biosemantic account of extension must be resolved. I attempt to do so by offering a complexity-based account of the natural extension of a representation R: for a given set of ancestral co-occurrences Z, the natural extension is the extension of the least complex intension that best covers Z. Minimal description length is considered as a means for measuring complexity. Some advantages of and problems with the account are identified.
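To make the selection criterion concrete, here is a minimal Python sketch of a two-part MDL scoring of candidate intensions. The `Intension` class, the bit costs, and the toy detector example are illustrative assumptions, not the paper's formal proposal.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Intension:
    description: str                  # the condition as expressed (its "sense")
    covers: Callable[[str], bool]     # which states of affairs satisfy it

def mdl_cost(intension: Intension, Z: set,
             bits_per_char: int = 8, miss_bits: int = 16) -> int:
    """Two-part code: bits to state the intension, plus a fixed
    correction cost for each member of Z it fails to cover."""
    misses = sum(1 for s in Z if not intension.covers(s))
    return len(intension.description) * bits_per_char + misses * miss_bits

def natural_extension(candidates, Z, domain):
    """Extension of the least complex intension that best covers Z."""
    best = min(candidates, key=lambda i: mdl_cost(i, Z))
    return {s for s in domain if best.covers(s)}

# Toy example: a detector signal R whose ancestral co-occurrences Z were
# all small dark moving things. Both candidates cover Z perfectly, but the
# generic intension has the shorter description, so its extension (which
# outruns Z) counts as the natural one.
Z = {"fly", "gnat", "midge"}
domain = Z | {"beebee", "leaf"}
candidates = [
    Intension("small dark moving thing",
              lambda s: s != "leaf"),
    Intension("a fly or a gnat or a midge",
              lambda s: s in {"fly", "gnat", "midge"}),
]
print(natural_extension(candidates, Z, domain))  # includes "beebee"
```

Note that on this toy encoding the natural extension includes instances that never appeared among the ancestral co-occurrences, which is the intended behavior of an account that fixes extension via the least complex covering intension rather than via Z itself.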
Grounded metacognitive architectures for machine consciousness
Multiple approaches to machine consciousness emphasise the importance of metacognitive states and processes. A considerable number of cognitive systems researchers prefer architectures that are not classically symbolic, and in which learning, rather than a priori structure, is central. But it is unclear how these grounded architectures can support metacognition of the required sort. To investigate this possibility, a basic design sketch of such an architecture is presented.
The physical mandate for belief-goal psychology
This article describes a heuristic argument for understanding certain physical systems in terms of properties that resemble the beliefs and goals of folk psychology. The argument rests on very simple assumptions. The core of the argument is that predictions about certain events can legitimately be based on assumptions about later events, resembling Aristotelian ‘final causation’; however, more nuanced causal entities (resembling fallible beliefs) must be introduced into these types of explanation in order for them to remain consistent with a causally local Universe.
Functionalism, revisionism, and qualia
From the editor's introduction: "Ron Chrisley and Aaron Sloman open Part I of this issue with their article “Functionalism, Revisionism, and Qualia.” Chrisley and Sloman discuss revisionism about qualia—the view that tries to navigate between naïve qualia realism and reductive eliminativism. The authors discuss the relevance of their approach to AI. They also relate their approach to the works they view as following the main tenets of revisionism about qualia. This includes Gilbert Harman’s version of functionalism, discussed in much detail (including Harman’s article “Explaining the Explanatory Gap,” published in the spring 2007 issue of this newsletter), and also the sensorimotor approach to qualia by Kevin O’Regan."
A human-centered approach to AI ethics: a perspective from cognitive science
This chapter explores a human-centered approach to AI and robot ethics. It demonstrates how a human-centered approach can resolve some problems in AI and robot ethics that arise from the fact that AI systems and robots have cognitive states, yet have no welfare and are not responsible. In particular, the approach allows that violence toward robots can be wrong even if robots cannot be harmed. More importantly, the approach encourages people to shift away from designing robots as if they were human ethical deliberators. Ultimately, the cognitive states of AI systems and robots may have a role to play in the proper ethical analysis of situations involving them, even if it is not by virtue of conferring welfare or responsibilities on those systems or robots.
Prevailing theories of consciousness are challenged by novel cross-modal associations acquired between subliminal stimuli
While theories of consciousness differ substantially, the ‘conscious access hypothesis’, which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1–4, the stimuli were word-pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name-profession pairs where one name was paired with a creative profession and the other an uncreative profession. A supraliminal task followed requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3, utilising non-linguistic stimuli. The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not consciously perceived, and thus challenge the conscious access hypothesis and those theories embracing it.
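As a concrete rendering of the dependent measure, here is a minimal Python sketch of the priming contrast the experiments rely on. The reaction times below are illustrative placeholders, not data from the study.

```python
# Minimal sketch of the priming contrast described above. A positive
# effect (faster concordant classification) would be conventional priming;
# the studies report the inverse pattern, i.e. a negative value here.
# These RTs are illustrative placeholders, not data from the experiments.
from statistics import mean

concordant_rt = [712, 698, 730, 705]  # ms: target preceded by its paired name
discordant_rt = [664, 671, 655, 680]  # ms: target preceded by the other name

priming_effect = mean(discordant_rt) - mean(concordant_rt)
print(f"priming effect: {priming_effect:+.2f} ms")  # negative => inverse priming
```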
Why Everything Doesn't Realize Every Computation
Some have suggested that there is no fact of the matter as to whether or not a particular physical system realizes a particular computational description. This suggestion has been taken to imply that computational states are not "real", and cannot, for example, provide a foundation for the cognitive sciences. In particular, Putnam has argued that every ordinary open physical system realizes every abstract finite automaton, implying that the fact that a particular computational characterization applies to a physical system does not tell one anything about the nature of that system. Putnam's argument is scrutinized and found inadequate because, among other things, it employs a notion of causation that is too weak. I argue that if one's view of computation involves embeddedness (inputs and outputs) and full causality, one can avoid the universal realizability results. Therefore, the fact that a particular system realizes a particular automaton is not a vacuous one, and is often explanatory. Furthermore, I claim that computation would not necessarily be an explanatorily vacuous notion even if it were universally realizable.
Keywords: computation, philosophy of computation, embeddedness, foundations of cognitive science, formality, multiple realization.
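To illustrate the structure of the dispute, here is a small Python sketch; the toy "rock physics", state names, and transition table are assumptions for illustration only. A Putnam-style gerrymandered state mapping succeeds trivially, while the stronger, input-sensitive condition the article favors rules it out.

```python
# Putnam-style trick: map the i-th physical state to the i-th automaton
# state. This succeeds for ANY trajectory of distinct states, which is
# exactly why it is explanatorily vacuous.
def gerrymandered_realization(trajectory, run):
    assert len(trajectory) == len(run)
    return dict(zip(trajectory, run))

# The stronger condition the article argues for: the mapping must respect
# the automaton's transition table delta for EVERY mapped state and EVERY
# input (counterfactual cases included), not just the actual trajectory.
def is_genuine_realization(mapping, physics, delta, inputs):
    return all(mapping.get(physics(p, i)) == delta[(q, i)]
               for p, q in mapping.items() for i in inputs)

run = ["q0", "q1", "q0", "q2"]            # any automaton run...
rock = ["r@t0", "r@t1", "r@t2", "r@t3"]   # ...and a rock's successive states
m = gerrymandered_realization(rock, run)

# Toy input-insensitive dynamics: the rock evolves the same way whatever
# the input, so it cannot honor input-sensitive transitions.
succ = {"r@t0": "r@t1", "r@t1": "r@t2", "r@t2": "r@t3", "r@t3": "r@t3"}
physics = lambda p, i: succ[p]
delta = {("q0", "a"): "q1", ("q0", "b"): "q2",
         ("q1", "a"): "q0", ("q1", "b"): "q2",
         ("q2", "a"): "q2", ("q2", "b"): "q2"}

print(m)                                                      # the vacuous map
print(is_genuine_realization(m, physics, delta, ["a", "b"]))  # False
```

The second check fails because the rock's dynamics ignore the input, so the mapping cannot honor transitions that distinguish inputs; this is one way of cashing out why embeddedness and full causality block universal realizability.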
Abstract of "Two problems for Higher-Order Thought Theories of Consciousness"
Higher-order thought theories of consciousness (HOT theories) claim that one is in a mental state M consciously iff: 1) one has the thought that one is in M, and 2) one actually is in M. I argue that such theories face two related problems, the Modal Problem and the Poor Theory Problem. I show that, according to HOT theories, it is an empirical possibility that I do not have any conscious mental states. This is the Modal Problem: HOT theories cannot be correct, because it is in fact empirically impossible that I do not have any conscious mental states. The Poor Theory Problem is that HOT theories, I argue, demand that we deny consciousness to any person whose theory of mind is incorrect to the point of failing to refer to their actual mental states. Yet it seems wrong to deny consciousness to someone just because they have an incorrect theory of mind. Insofar as perception is conceptual, higher-order perception theories of consciousness (HOP theories) fall afoul of the same problems. I consider whether a variant of higher-order theories involving non-conceptual representation of one's own mental states might avoid these problems.
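The condition at issue can be put schematically; the notation below is illustrative shorthand, not the paper's own formalism.

```latex
% Illustrative shorthand for the HOT condition (not the author's notation):
% subject S is consciously in mental state M iff S is in M and S has the
% higher-order thought that S is in M.
\[
  \mathrm{ConsciouslyIn}(S, M)
  \;\leftrightarrow\;
  \mathrm{In}(S, M) \,\wedge\, \mathrm{Thinks}\bigl(S,\, \mathrm{In}(S, M)\bigr)
\]
% The Modal Problem exploits the right-hand conjunct: since having such
% higher-order thoughts is an empirical matter, the theory makes
% \( \lnot \exists M\, \mathrm{ConsciouslyIn}(S, M) \) an empirical
% possibility, which the author argues it cannot be.
```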
