201 research outputs found
Teacher Educators and the Practical Component In Teacher Education
The aim of the research was to identify the characteristics of the roles and work of teacher educators responsible for the practical component in teacher education. The research was conducted as a narrative case study, collecting the narratives of a veteran teacher educator who served as a pedagogical instructor responsible for teaching practice. The conclusions call for pedagogical instructors to form solid professional world-views in order to ensure that their performance is professional; they should possess specific skills, engage in perpetual study, and demonstrate mastery of the latest theories of pedagogical instruction. Furthermore, they should function competently in two diverse work environments, namely, the school, which is the practice field, and the teacher education institution. These characteristics imbue pedagogical instruction with the status of a profession, and teacher educators with the status of expert professionals. Keywords: teacher education, practice, pedagogical instruction, teacher educator
The Nature of Physical Computation
Computing systems are everywhere today. Even the brain is thought to be a sort of computing system. But what does it mean to say that a given organ or system computes? What is it about laptops, smartphones, and nervous systems in virtue of which they are deemed to compute, and why does it seldom occur to us to describe stomachs, hurricanes, rocks, or chairs that way? The book provides an extended argument for the semantic view of computation, which states that semantic properties are involved in the nature of computing systems. Laptops, smartphones, and nervous systems compute because they are accompanied by representations. Stomachs, hurricanes, and rocks, for instance, which do not have semantic properties, do not compute. The first part of the book argues that the linkage between the mathematical theory of computability and the notion of physical computation is weak. Theoretical notions such as algorithm, effective procedure, program, and automaton play only a minor role in identifying physical computation. The second part of the book reviews three influential accounts of physical computation and argues that while none of them is satisfactory, each highlights certain key features of physical computation. The final part of the book develops and argues for a semantic account of physical computation and offers a characterization of computational explanations.
Cognitive Computation sans Representation
The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature in distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is extrinsic to formal procedures as such, and the intended interpretation of syntax makes no difference to the execution of an algorithm. So the unique 'content' postulated by RTM is superfluous to the formal procedures of CTM. And once these procedures are implemented in a physical mechanism, it is exclusively the causal properties of the physical mechanism that are responsible for all aspects of the system's behaviour. So once again, postulated content is rendered superfluous. To the extent that semantic content may appear to play a role in behaviour, it must be syntactically encoded within the system, and just as in a standard computational artefact, so too with the human mind/brain: it's pure syntax all the way down to the level of physical implementation. Hence 'content' is at most a convenient meta-level gloss, projected from the outside by human theorists, which itself can play no role in cognitive processing.
Computation in Physical Systems: A Normative Mapping Account
The relationship between abstract formal procedures and the activities of actual physical systems has proved to be surprisingly subtle and controversial, and there are a number of competing accounts of when a physical system can be properly said to implement a mathematical formalism and hence perform a computation. I defend an account wherein computational descriptions of physical systems are high-level normative interpretations motivated by our pragmatic concerns. Furthermore, the criteria of utility and success vary according to our diverse purposes and pragmatic goals. Hence there is no independent or uniform fact of the matter, and I advance the ‘anti-realist’ conclusion that computational descriptions of physical systems are not founded upon deep ontological distinctions, but rather upon interest-relative human conventions. Hence physical computation is a ‘conventional’ rather than a ‘natural’ kind.
Integrating computation into the mechanistic hierarchy in the cognitive and neural sciences
It is generally accepted that, in the cognitive sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system (e.g., an automaton) to the physical, implementing states, is a homomorphism mapping relation. The mechanistic relation, however, is that of part/whole; the explanans in a mechanistic explanation are components of the explanandum phenomenon. Moreover, each component in one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent properties that implement them. How, then, do the computational and implementational properties integrate to create the mechanistic hierarchy? After explicating the general problem (section 2), we further demonstrate it through a concrete example from cognitive neuroscience, reinforcement learning (sections 3 and 4). We then examine two possible solutions (section 5). On one solution, the mechanistic hierarchy embeds computational and implementational properties at the same levels. This picture fits with the view that computational explanations are mechanism sketches. On the other solution, there are two separate hierarchies, one computational and another implementational, which are related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit with the view that computational explanations are full-fledged mechanistic explanations.
Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches (section 6).
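The homomorphism form of the implementation relation described in this abstract can be made concrete with a toy example. Everything below (the automaton, the physical dynamics, and the labelling) is invented for illustration and is not taken from the paper: a mapping from physical micro-states to automaton states implements the automaton just in case labelling after a physical transition agrees with an abstract transition after labelling.

```python
# Toy illustration (hypothetical example): the implementation relation as a
# homomorphism from physical states to the states of an abstract automaton.

# Abstract automaton: two states and a single "tick" transition.
auto_step = {"A": "B", "B": "A"}

# Hypothetical physical dynamics over four micro-states.
phys_step = {"p0": "p2", "p1": "p3", "p2": "p0", "p3": "p1"}

# Candidate implementation mapping: physical state -> computational state.
label = {"p0": "A", "p1": "A", "p2": "B", "p3": "B"}

def implements(phys_step, auto_step, label):
    """True iff the diagram commutes: label(phys_step(p)) == auto_step(label(p))."""
    return all(label[phys_step[p]] == auto_step[label[p]] for p in phys_step)

print(implements(phys_step, auto_step, label))  # prints: True
```

In this toy system, changing any single label breaks the commuting condition, which brings out the sense in which implementation is a substantive constraint on the mapping rather than a free stipulation.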
From public service broadcasting towards soci(et)al TV. Producers’ perceptions of interactivity and audience participation in Finland and Israel
In a changing media environment, television is being transformed by the adoption of practices such as audience participation and interactivity. This article analyses the ways in which managers and producers in Finnish and Israeli public service and hybrid television companies perceive participation and interactivity. We suggest that while these concepts can be described by hybrid broadcasters using the technologically- and commercially-oriented concept of ‘social TV’, the term does not adequately address the perceptions of socially-oriented public service broadcasters (PSBs). Hence, we propose the society- and value-oriented concept of ‘soci(et)al TV’ in an effort to conceptualise the PSBs’ perceptions concerning the adoption of interactivity and participation practices while they seek to fulfil their social commitments and objectives. Our argument is based on a comparative study of two different broadcasting models (public service vs. hybrid) in two national media systems and cultures.
On Two Different Kinds of Computational Indeterminacy
It is often indeterminate what function a given computational system computes. This phenomenon has been referred to as “computational indeterminacy” or “multiplicity of computations”. In this paper, we argue that what has typically been considered and referred to as the (unique) challenge of computational indeterminacy in fact subsumes two distinct phenomena, which are typically bundled together and should be teased apart. One kind of indeterminacy concerns a functional (or formal) characterization of the system’s relevant behavior (briefly: how its physical states are grouped together and corresponded to abstract states). Another kind concerns the manner in which the abstract (or computational) states are interpreted (briefly: what function the system computes). We discuss the similarities and differences between the two kinds of computational indeterminacy, their implications for certain accounts of “computational individuation” in the literature, and their relevance to different levels of description within the computational system. We also examine the interrelationships between our proposed accounts of the two kinds of indeterminacy and the main accounts of “computational implementation”.
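The second kind of indeterminacy, concerning how the abstract states are interpreted, can be illustrated with a familiar toy case (my construction, not the authors'): a single physical gate with a fixed voltage behaviour computes AND under one voltage-to-bit labelling and OR under the reverse labelling.

```python
# Toy illustration (hypothetical example): one fixed physical behaviour,
# two interpretations of its states, two different computed functions.

# Physical behaviour: output voltage level for each pair of input levels.
gate = {("lo", "lo"): "lo", ("lo", "hi"): "lo",
        ("hi", "lo"): "lo", ("hi", "hi"): "hi"}

def computed_function(labelling):
    """Read off the gate's truth table under a given voltage -> bit labelling."""
    return {(labelling[a], labelling[b]): labelling[out]
            for (a, b), out in gate.items()}

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
OR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

print(computed_function({"lo": 0, "hi": 1}) == AND)  # prints: True
print(computed_function({"lo": 1, "hi": 0}) == OR)   # prints: True
```

Nothing about the physical transitions settles which labelling is correct; that is what makes this an indeterminacy of interpretation rather than of how physical states are grouped into abstract ones.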
The active inference approach to ecological perception: general information dynamics for natural and artificial embodied cognition
The emerging neurocomputational vision of humans as embodied, ecologically embedded, social agents—who shape and are shaped by their environment—offers a golden opportunity to revisit and revise ideas about the physical and information-theoretic underpinnings of life, mind, and consciousness itself. In particular, the active inference framework (AIF) makes it possible to bridge connections from computational neuroscience and robotics/AI to ecological psychology and phenomenology, revealing common underpinnings and overcoming key limitations. AIF opposes the mechanistic to the reductive, while staying fully grounded in a naturalistic and information-theoretic foundation, using the principle of free energy minimization. The latter provides a theoretical basis for a unified treatment of particles, organisms, and interactive machines, spanning from the inorganic to organic, non-life to life, and natural to artificial agents. We provide a brief introduction to AIF, then explore its implications for evolutionary theory, ecological psychology, embodied phenomenology, and robotics/AI research. We conclude the paper by considering implications for machine consciousness.
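As a purely illustrative sketch of the free-energy minimization idea mentioned above (a textbook-style simplification, not the authors' model), consider one-dimensional Gaussian perception: an agent updates its belief about a hidden cause by gradient descent on a free-energy (prediction-error) functional, converging to the precision-weighted compromise between its prior and its observation.

```python
# Minimal sketch (hypothetical values): free-energy minimisation for
# one-dimensional Gaussian perception. The agent refines its belief mu
# about a hidden cause given a noisy observation y.

prior_mu, prior_var = 0.0, 1.0   # prior belief over the hidden cause
obs_var = 0.5                    # sensory noise (variance)
y = 2.0                          # observed sensory sample

def free_energy(mu):
    """Sum of precision-weighted squared prediction errors (sensory + prior)."""
    return (y - mu) ** 2 / (2 * obs_var) + (mu - prior_mu) ** 2 / (2 * prior_var)

mu, lr = 0.0, 0.05
for _ in range(500):
    grad = -(y - mu) / obs_var + (mu - prior_mu) / prior_var  # dF/dmu
    mu -= lr * grad

# Gradient descent converges to the precision-weighted posterior mean.
posterior_mean = (y / obs_var + prior_mu / prior_var) / (1 / obs_var + 1 / prior_var)
print(round(mu, 3), round(posterior_mean, 3))  # prints: 1.333 1.333
```

The gradient combines two precision-weighted prediction errors, one sensory and one prior-based, which is the standard predictive-coding reading of free-energy minimisation for perception.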
