
    Comparison and Mapping Facilitate Relation Discovery and Predication

    Relational concepts play a central role in human perception and cognition, but little is known about how they are acquired. For example, how do we come to understand that physical force is a higher-order multiplicative relation between mass and acceleration, or that two circles are the same-shape in the same way that two squares are? A recent model of relational learning, DORA (Discovery of Relations by Analogy; Doumas, Hummel & Sandhofer, 2008), predicts that comparison and analogical mapping play a central role in the discovery and predication of novel higher-order relations. We report two experiments that test and confirm this prediction.
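
    As a toy illustration of what discovering such a higher-order multiplicative relation could amount to computationally (a hypothetical sketch, not the DORA model itself), comparing aligned mass/acceleration/force examples can expose the shared invariant force = mass * acceleration:

        # Toy sketch (not DORA): compare aligned examples to spot a shared
        # multiplicative relation, here force = mass * acceleration.
        examples = [
            {"mass": 2.0, "acceleration": 3.0, "force": 6.0},
            {"mass": 5.0, "acceleration": 0.5, "force": 2.5},
            {"mass": 1.5, "acceleration": 4.0, "force": 6.0},
        ]

        # If force / (mass * acceleration) is constant across the compared cases,
        # the comparison has exposed a shared multiplicative structure.
        ratios = [e["force"] / (e["mass"] * e["acceleration"]) for e in examples]
        print(ratios, max(ratios) - min(ratios) < 1e-9)   # [1.0, 1.0, 1.0] True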

    Tensors and compositionality in neural systems

    Neither neurobiological nor process models of meaning composition specify the operator through which constituent parts are bound together into compositional structures. In this paper, we argue that a neurophysiological computation system cannot achieve the compositionality exhibited in human thought and language if it were to rely on a multiplicative operator to perform binding, as the tensor product (TP)-based systems that have been widely adopted in cognitive science, neuroscience and artificial intelligence do. We show via simulation and two behavioural experiments that TPs violate variable-value independence, but human behaviour does not. Specifically, TPs fail to capture that in the statements fuzzy cactus and fuzzy penguin, both cactus and penguin are predicated by fuzzy(x) and belong to the set of fuzzy things, rendering these arguments similar to each other. Consistent with that thesis, people judged arguments that shared the same role to be similar, even when those arguments themselves (e.g., cacti and penguins) were judged to be dissimilar in isolation. By contrast, the similarity of the TPs representing fuzzy(cactus) and fuzzy(penguin) was determined by the similarity of the arguments, which in this case approaches zero. Based on these results, we argue that neural systems that use TPs for binding cannot approximate how the human mind and brain represent compositional information during processing. We describe a contrasting binding mechanism that any physiological or artificial neural system could use to maintain independence between a role and its argument, a prerequisite for compositionality and, thus, for instantiating the expressive power of human thought and language in a neural system.
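
    A small numerical sketch of the property criticized above (the vectors and dimensionality here are arbitrary assumptions, not the authors' materials): for tensor-product binding, cos(r ⊗ a, r ⊗ b) = cos(r, r) · cos(a, b), so the similarity of two bindings that share a role collapses onto the similarity of their arguments.

        import numpy as np

        rng = np.random.default_rng(0)

        def unit(v):
            return v / np.linalg.norm(v)

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Hypothetical random vectors standing in for the role and two arguments.
        fuzzy   = unit(rng.standard_normal(256))   # role: fuzzy(x)
        cactus  = unit(rng.standard_normal(256))   # argument 1
        penguin = unit(rng.standard_normal(256))   # argument 2, unrelated to cactus

        # Tensor-product bindings, flattened to vectors for comparison.
        tp_cactus  = np.outer(fuzzy, cactus).ravel()
        tp_penguin = np.outer(fuzzy, penguin).ravel()

        # The bound structures are no more similar than their arguments are,
        # which for independent random vectors is close to zero.
        print(cosine(cactus, penguin))        # ~0
        print(cosine(tp_cactus, tp_penguin))  # ~0 as well

    On the abstract's account, people instead judge fuzzy(cactus) and fuzzy(penguin) to be similar because both fill the fuzzy(x) role, which is the sense in which TPs violate variable-value independence while human behaviour does not.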

    What does memory retrieval leave on the table? Modelling the Cost of Semi-Compositionality with MINERVA2 and sBERT

    Despite being ubiquitous in natural language, collocations (e.g., kick+habit) incur a unique processing cost compared to compositional phrases (kick+door) and idioms (kick+bucket). We confirm this cost with behavioural data as well as with MINERVA2, a memory model, suggesting that collocations constitute a distinct linguistic category. While the model fails to fully capture the observed human processing patterns, we find that below a specific item-frequency threshold, the model's retrieval failures align with human reaction times across conditions. This suggests an alternative processing mechanism that activates when memory retrieval fails.
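
    Because the modelling result hinges on MINERVA2's retrieval behaviour, here is a stripped-down sketch of its core computation (Hintzman's cubed-similarity echo intensity); the trace counts, learning rate, and retrieval threshold below are illustrative assumptions, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def echo_intensity(probe, traces):
            """Minimal MINERVA2: cube each probe-trace similarity, then sum."""
            # Similarity is normalised by the number of features that are
            # nonzero in either the probe or the trace.
            relevant = (probe != 0) | (traces != 0)
            similarities = (traces * probe).sum(axis=1) / relevant.sum(axis=1)
            return (similarities ** 3).sum()

        def store(item, n_copies, learning_rate=0.6):
            """Encode n_copies of an item; each feature is stored with probability L."""
            mask = rng.random((n_copies, item.size)) < learning_rate
            return item * mask

        # A phrase is a random +/-1 feature vector; frequent phrases leave many
        # traces in memory, infrequent ones only a few.
        phrase = rng.choice([-1, 1], size=64)
        threshold = 1.0   # hypothetical retrieval-failure threshold

        for label, n_copies in [("frequent", 50), ("infrequent", 2)]:
            intensity = echo_intensity(phrase, store(phrase, n_copies))
            status = "retrieved" if intensity > threshold else "retrieval failure"
            print(label, round(intensity, 2), status)

    In this kind of setup, low-frequency items fall below the echo-intensity threshold and fail to retrieve, which is the regime the abstract links to the distinctive processing cost of collocations.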

    Computational perspectives on cognitive development

    This article reviews efforts to develop process models of infants' and children's cognition. Computational process models provide a tool for elucidating the causal mechanisms involved in learning and development. The history of computational modeling in developmental psychology broadly follows the same trends that have run throughout cognitive science, including rule-based models, neural network (connectionist) models, ACT-R models, ART models, decision tree models, reinforcement learning models, and hybrid models, among others.