Generalization of within-category feature correlations
Theoretical and empirical work in the field of classification learning is centered on a ‘reference point’ view, where learners are thought to represent categories in terms of stored points in psychological space (e.g., prototypes, exemplars, clusters). Reference point representations fully specify how regions of psychological space are associated with class labels, but they do not contain information about how features relate to one another (within-class or otherwise). We present a novel experiment suggesting human learners acquire knowledge of within-class feature correlations and use this knowledge during generalization. Our methods conform strictly to the traditional artificial classification learning paradigm, and our results cannot be explained by any prominent reference point model (i.e., GCM, ALCOVE). An alternative to the reference point framework (DIVA) provides a strong account of the observed performance. We additionally describe preliminary work on a novel discriminative clustering model that also explains our results.
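The reference point models named here (GCM, ALCOVE) differ in their stored points and learning rules, but they share one decision computation: summed, attention-weighted similarity of the probe to each category's reference points. A minimal sketch of that shared rule follows (GCM-style; the function name, the sensitivity parameter c, and the attention weights are illustrative, and response-scaling and bias parameters are omitted). Note that nothing in this computation encodes how features covary within a class, which is exactly the knowledge the experiment probes.

```python
import numpy as np

def gcm_choice_probs(item, exemplars, labels, attention, c=2.0):
    """Reference point decision rule (GCM-style): category evidence is
    summed attention-weighted similarity to that category's exemplars."""
    # attention-weighted city-block distance from the probe to each exemplar
    d = (attention * np.abs(exemplars - item)).sum(axis=1)
    s = np.exp(-c * d)  # exponential similarity gradient
    evidence = np.array([s[labels == k].sum() for k in np.unique(labels)])
    return evidence / evidence.sum()  # Luce choice rule over categories
```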
Linear separability and human category learning: Revisiting a classic study
The ability to acquire non-linearly separable (NLS) classifications is well documented in the study of human category learning. In particular, one experiment (Medin & Schwanenflugel, 1981; E4) is viewed as the canonical demonstration that, when within- and between-category similarities are evenly matched, NLS classifications are not more difficult to acquire than linearly separable ones. The results of this study are somewhat at issue due to non-standard methodology and small sample size. We present a replication and extension of this classic experiment. We did not find any evidence of an advantage for linearly separable classifications. In fact, the marginal NLS advantage observed in the original study was strengthened: we found a significant advantage for the NLS classification. These results are discussed with respect to accounts provided by formal models of human classification learning.
Switch it up: Learning Categories via Feature Switching
This research introduces the switch task, a novel learning mode that fits with calls for a broader explanatory account of human category learning (Kurtz, 2015; Markman & Ross, 2003; Murphy, 2002). Learning with the switch task is a process of turning each presented exemplar into a member of another designated category. This paper presents the switch task to further explore the contingencies between learning goals, learning modes, outcomes, and category representations. The process of successfully transforming exemplars into members of a target category requires generative knowledge such as within-category feature correspondences, similar to inference learning. Given that the ability to switch items between categories nicely encapsulates category knowledge, how does this relate to more familiar tasks like inferring features and classifying exemplars? To address this question we present an empirical investigation of this new task, side-by-side with the well-established alternative of classification learning. The results show that the category knowledge acquired through switch learning shares similarities with inference learning and provides insight into the processes at work. The implications of this research, particularly the distinctions between this learning mode and well-known alternatives, are discussed.
Exemplar models can’t see the forest for the trees
We investigated human learning and generalization of three novel category structures based on eight exemplars in a continuous (9x9) stimulus space. Each category requires attention to both dimensions, but they differ in their organization. Critically, all three category types are matched on within- and between-category exemplar distances. The first category structure conforms to a condensation or information-integration type of problem with two classes separable by a diagonal bound. The other category structures cannot be solved with a linear decision boundary. We found that learners trained on the diagonal bound structure showed significantly better learning and generalization performance. In computational simulations, we found that an exemplar model (ALCOVE) could not account for the observed pattern. We posit that ALCOVE is constrained by the matched distances to learn these category structures at the same speed. Another similarity-based model with different basic design principles (DIVA) provided a good account of the behavioral data.
A Dissociation between Categorization and Similarity to Exemplars
Research in category learning has been dominated by a ‘reference point’ view in which items are classified based on attention-weighted similarity to reference points (e.g., prototypes, exemplars, clusters) in a multidimensional space. Although much work has attempted to distinguish between particular types of reference point models, they share a core design principle that items will be classified as belonging to the category of the most proximal reference point(s). In this paper, we present an original experiment challenging this distance assumption. After classification training on a modified XOR category structure, we find that many learners generalize their category knowledge to novel exemplars in a manner that violates the distance assumption. This pattern of performance reveals a fundamental limitation in the reference point framework and suggests that stimulus generalization is not a reliable foundation for explaining human category learning.
Solving Nonlinearly Separable Classifications in a Single-Layer Neural Network
Since the work of Minsky and Papert (1969), it has been understood that single-layer neural networks cannot solve nonlinearly separable classifications (i.e., XOR). We describe and test a novel divergent autoassociative architecture capable of solving nonlinearly separable classifications with a single layer of weights. The proposed network consists of class-specific linear autoassociators. The power of the model comes from treating classification problems as within-class feature prediction rather than directly optimizing a discriminant function. We show unprecedented learning capabilities for a simple, single-layer network (i.e., solving XOR) and demonstrate that the famous limitation in acquiring nonlinearly separable problems is not just about the need for a hidden layer; it is about the choice between directly predicting classes or learning to classify indirectly by predicting features.
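The abstract leaves out implementation details, but the architecture it describes admits a compact sketch: one linear autoassociator (a single weight layer plus bias) per class, each trained to reconstruct only its own class's exemplars, with classification read out as whichever channel reconstructs the probe with less error. Everything below beyond that description is an assumption of the sketch; the learning rate, epoch count, squared-error readout, and zero initialization are illustrative choices, not the authors' settings.

```python
import numpy as np

# XOR: class 0 = {(0,0), (1,1)}, class 1 = {(0,1), (1,0)}
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.array([0, 0, 1, 1])

def train_autoassociator(Xc, lr=0.1, epochs=20000):
    """Fit one linear autoassociator (single weight layer plus bias)
    by gradient descent to reconstruct one class's exemplars."""
    n = Xc.shape[1]
    W, b = np.zeros((n, n)), np.zeros(n)
    for _ in range(epochs):
        err = Xc @ W + b - Xc              # reconstruction error
        W -= lr * Xc.T @ err / len(Xc)
        b -= lr * err.mean(axis=0)
    return W, b

# One channel per class, trained on that class's exemplars only.
channels = [train_autoassociator(X[y == c]) for c in (0, 1)]

# Classify each pattern by the channel that reconstructs it best.
errors = np.stack([((X @ W + b - X) ** 2).sum(axis=1) for W, b in channels])
print(errors.argmin(axis=0))  # [0 0 1 1]: XOR solved with one weight layer
```

In this sketch the zero initialization does real work: gradient descent then settles on the minimum-norm fit, so each channel captures only its own class's feature structure instead of degenerating toward an identity map that reconstructs everything equally well.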
